Test Report: KVM_Linux_crio 19761

b3514a663b846d20eab704dde0dd7737dbedcda0:2024-10-07:36539

Failed tests (19/222)

TestAddons/serial/GCPAuth/PullSecret (480.59s)

=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:614: (dbg) Run:  kubectl --context addons-681605 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-681605 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [418202f3-6a6f-41d5-bdd6-50b1f855a708] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:329: TestAddons/serial/GCPAuth/PullSecret: WARNING: pod list for "default" "integration-test=busybox" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:627: ***** TestAddons/serial/GCPAuth/PullSecret: pod "integration-test=busybox" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:627: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-681605 -n addons-681605
addons_test.go:627: TestAddons/serial/GCPAuth/PullSecret: showing logs for failed pods as of 2024-10-07 10:32:36.646588503 +0000 UTC m=+658.829374514
addons_test.go:627: (dbg) Run:  kubectl --context addons-681605 describe po busybox -n default
addons_test.go:627: (dbg) kubectl --context addons-681605 describe po busybox -n default:
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-681605/192.168.39.71
Start Time:       Mon, 07 Oct 2024 10:24:36 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.22
IPs:
  IP:  10.244.0.22
Containers:
  busybox:
    Container ID:  
    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m74rn (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-m74rn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  8m                      default-scheduler  Successfully assigned default/busybox to addons-681605
  Normal   Pulling    6m35s (x4 over 8m)      kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Warning  Failed     6m35s (x4 over 7m59s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
  Warning  Failed     6m35s (x4 over 7m59s)   kubelet            Error: ErrImagePull
  Warning  Failed     6m22s (x6 over 7m59s)   kubelet            Error: ImagePullBackOff
  Normal   BackOff    2m54s (x21 over 7m59s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
addons_test.go:627: (dbg) Run:  kubectl --context addons-681605 logs busybox -n default
addons_test.go:627: (dbg) Non-zero exit: kubectl --context addons-681605 logs busybox -n default: exit status 1 (68.898756ms)

** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "busybox" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:627: kubectl --context addons-681605 logs busybox -n default: exit status 1
addons_test.go:629: wait: integration-test=busybox within 8m0s: context deadline exceeded
--- FAIL: TestAddons/serial/GCPAuth/PullSecret (480.59s)
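Failure summary: every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc was rejected with "unable to retrieve auth token: invalid username/password: unauthorized: authentication failed", so the pod never left ImagePullBackOff and the 8m0s wait expired. A minimal re-check sketch against the same profile; these commands are illustrative only (not part of the recorded run) and assume addons-681605 is still up. Pulling directly on the node bypasses the gcp-auth-injected pull secret, which helps isolate whether the registry itself is reachable:

    kubectl --context addons-681605 describe pod busybox -n default
    kubectl --context addons-681605 get events -n default --field-selector involvedObject.name=busybox
    out/minikube-linux-amd64 -p addons-681605 ssh "sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc"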

TestAddons/parallel/Ingress (156.21s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-681605 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-681605 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-681605 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e7bbb5ed-1d5a-45fd-9651-7ca5a6f91cb3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e7bbb5ed-1d5a-45fd-9651-7ca5a6f91cb3] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.004225547s
I1007 10:33:25.133091   11096 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-681605 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-681605 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.811718925s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-681605 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-681605 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.71
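Failure summary: the nginx pod was Running within 13s, but the in-VM curl against the ingress (Host: nginx.example.com) exited with status 28, curl's timeout exit code, so 127.0.0.1:80 never answered before the 2m10s ssh session gave up. A hedged re-check sketch; the commands below are illustrative only (not part of the recorded run) and assume the profile is still running:

    out/minikube-linux-amd64 -p addons-681605 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    kubectl --context addons-681605 -n ingress-nginx get pods,svc
    kubectl --context addons-681605 get ingress -n default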
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-681605 -n addons-681605
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-681605 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-681605 logs -n 25: (1.283950371s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 07 Oct 24 10:22 UTC | 07 Oct 24 10:22 UTC |
	| delete  | -p download-only-484375                                                                     | download-only-484375 | jenkins | v1.34.0 | 07 Oct 24 10:22 UTC | 07 Oct 24 10:22 UTC |
	| delete  | -p download-only-052891                                                                     | download-only-052891 | jenkins | v1.34.0 | 07 Oct 24 10:22 UTC | 07 Oct 24 10:22 UTC |
	| delete  | -p download-only-484375                                                                     | download-only-484375 | jenkins | v1.34.0 | 07 Oct 24 10:22 UTC | 07 Oct 24 10:22 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-079912 | jenkins | v1.34.0 | 07 Oct 24 10:22 UTC |                     |
	|         | binary-mirror-079912                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43695                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-079912                                                                     | binary-mirror-079912 | jenkins | v1.34.0 | 07 Oct 24 10:22 UTC | 07 Oct 24 10:22 UTC |
	| addons  | disable dashboard -p                                                                        | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:22 UTC |                     |
	|         | addons-681605                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:22 UTC |                     |
	|         | addons-681605                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-681605 --wait=true                                                                | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:22 UTC | 07 Oct 24 10:24 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-681605 addons disable                                                                | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:24 UTC | 07 Oct 24 10:24 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-681605 addons disable                                                                | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:32 UTC | 07 Oct 24 10:32 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:32 UTC | 07 Oct 24 10:32 UTC |
	|         | -p addons-681605                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-681605 addons disable                                                                | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:32 UTC | 07 Oct 24 10:32 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-681605 addons disable                                                                | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:33 UTC | 07 Oct 24 10:33 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-681605 ip                                                                            | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:33 UTC | 07 Oct 24 10:33 UTC |
	| addons  | addons-681605 addons disable                                                                | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:33 UTC | 07 Oct 24 10:33 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-681605 addons                                                                        | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:33 UTC | 07 Oct 24 10:33 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-681605 addons                                                                        | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:33 UTC | 07 Oct 24 10:33 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:33 UTC | 07 Oct 24 10:33 UTC |
	|         | -p addons-681605                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-681605 ssh cat                                                                       | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:33 UTC | 07 Oct 24 10:33 UTC |
	|         | /opt/local-path-provisioner/pvc-44bb06b3-65c8-40a0-8efe-d6acb8e8851b_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-681605 addons disable                                                                | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:33 UTC | 07 Oct 24 10:34 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-681605 ssh curl -s                                                                   | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:33 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-681605 addons                                                                        | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:34 UTC | 07 Oct 24 10:34 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-681605 addons                                                                        | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:34 UTC | 07 Oct 24 10:34 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-681605 ip                                                                            | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:35 UTC | 07 Oct 24 10:35 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 10:22:20
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 10:22:20.006721   11818 out.go:345] Setting OutFile to fd 1 ...
	I1007 10:22:20.006838   11818 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:22:20.006847   11818 out.go:358] Setting ErrFile to fd 2...
	I1007 10:22:20.006851   11818 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:22:20.007049   11818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 10:22:20.007635   11818 out.go:352] Setting JSON to false
	I1007 10:22:20.008459   11818 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":234,"bootTime":1728296306,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 10:22:20.008564   11818 start.go:139] virtualization: kvm guest
	I1007 10:22:20.011046   11818 out.go:177] * [addons-681605] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 10:22:20.012623   11818 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 10:22:20.012645   11818 notify.go:220] Checking for updates...
	I1007 10:22:20.014995   11818 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 10:22:20.016096   11818 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:22:20.017313   11818 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:22:20.018441   11818 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 10:22:20.019630   11818 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 10:22:20.020888   11818 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 10:22:20.053491   11818 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 10:22:20.054760   11818 start.go:297] selected driver: kvm2
	I1007 10:22:20.054777   11818 start.go:901] validating driver "kvm2" against <nil>
	I1007 10:22:20.054789   11818 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 10:22:20.055478   11818 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:22:20.055566   11818 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19761-3912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 10:22:20.070619   11818 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 10:22:20.070666   11818 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 10:22:20.070904   11818 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 10:22:20.070935   11818 cni.go:84] Creating CNI manager for ""
	I1007 10:22:20.070975   11818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 10:22:20.070983   11818 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 10:22:20.071031   11818 start.go:340] cluster config:
	{Name:addons-681605 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-681605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHA
gentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:22:20.071115   11818 iso.go:125] acquiring lock: {Name:mk894f62b1e70fe8556c552f818beb1e4be6fad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:22:20.072814   11818 out.go:177] * Starting "addons-681605" primary control-plane node in "addons-681605" cluster
	I1007 10:22:20.074390   11818 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:22:20.074448   11818 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 10:22:20.074461   11818 cache.go:56] Caching tarball of preloaded images
	I1007 10:22:20.074567   11818 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 10:22:20.074584   11818 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 10:22:20.074907   11818 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/config.json ...
	I1007 10:22:20.074934   11818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/config.json: {Name:mk0a3fe40c14a0f70ab6963b6c11a89bec5f8a19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:22:20.075130   11818 start.go:360] acquireMachinesLock for addons-681605: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 10:22:20.075199   11818 start.go:364] duration metric: took 48.355µs to acquireMachinesLock for "addons-681605"
	I1007 10:22:20.075227   11818 start.go:93] Provisioning new machine with config: &{Name:addons-681605 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:addons-681605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:22:20.075296   11818 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 10:22:20.077005   11818 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1007 10:22:20.077180   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:22:20.077240   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:22:20.091805   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I1007 10:22:20.092326   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:22:20.092891   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:22:20.092913   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:22:20.093244   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:22:20.093427   11818 main.go:141] libmachine: (addons-681605) Calling .GetMachineName
	I1007 10:22:20.093589   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:22:20.093724   11818 start.go:159] libmachine.API.Create for "addons-681605" (driver="kvm2")
	I1007 10:22:20.093755   11818 client.go:168] LocalClient.Create starting
	I1007 10:22:20.093789   11818 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem
	I1007 10:22:20.210324   11818 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem
	I1007 10:22:20.286119   11818 main.go:141] libmachine: Running pre-create checks...
	I1007 10:22:20.286143   11818 main.go:141] libmachine: (addons-681605) Calling .PreCreateCheck
	I1007 10:22:20.286613   11818 main.go:141] libmachine: (addons-681605) Calling .GetConfigRaw
	I1007 10:22:20.287099   11818 main.go:141] libmachine: Creating machine...
	I1007 10:22:20.287112   11818 main.go:141] libmachine: (addons-681605) Calling .Create
	I1007 10:22:20.287294   11818 main.go:141] libmachine: (addons-681605) Creating KVM machine...
	I1007 10:22:20.288556   11818 main.go:141] libmachine: (addons-681605) DBG | found existing default KVM network
	I1007 10:22:20.289274   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:20.289137   11840 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002091f0}
	I1007 10:22:20.289308   11818 main.go:141] libmachine: (addons-681605) DBG | created network xml: 
	I1007 10:22:20.289327   11818 main.go:141] libmachine: (addons-681605) DBG | <network>
	I1007 10:22:20.289334   11818 main.go:141] libmachine: (addons-681605) DBG |   <name>mk-addons-681605</name>
	I1007 10:22:20.289339   11818 main.go:141] libmachine: (addons-681605) DBG |   <dns enable='no'/>
	I1007 10:22:20.289376   11818 main.go:141] libmachine: (addons-681605) DBG |   
	I1007 10:22:20.289406   11818 main.go:141] libmachine: (addons-681605) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1007 10:22:20.289430   11818 main.go:141] libmachine: (addons-681605) DBG |     <dhcp>
	I1007 10:22:20.289439   11818 main.go:141] libmachine: (addons-681605) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1007 10:22:20.289449   11818 main.go:141] libmachine: (addons-681605) DBG |     </dhcp>
	I1007 10:22:20.289456   11818 main.go:141] libmachine: (addons-681605) DBG |   </ip>
	I1007 10:22:20.289464   11818 main.go:141] libmachine: (addons-681605) DBG |   
	I1007 10:22:20.289470   11818 main.go:141] libmachine: (addons-681605) DBG | </network>
	I1007 10:22:20.289521   11818 main.go:141] libmachine: (addons-681605) DBG | 
	I1007 10:22:20.295027   11818 main.go:141] libmachine: (addons-681605) DBG | trying to create private KVM network mk-addons-681605 192.168.39.0/24...
	I1007 10:22:20.363665   11818 main.go:141] libmachine: (addons-681605) DBG | private KVM network mk-addons-681605 192.168.39.0/24 created
	I1007 10:22:20.363729   11818 main.go:141] libmachine: (addons-681605) Setting up store path in /home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605 ...
	I1007 10:22:20.363756   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:20.363665   11840 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:22:20.363775   11818 main.go:141] libmachine: (addons-681605) Building disk image from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 10:22:20.363820   11818 main.go:141] libmachine: (addons-681605) Downloading /home/jenkins/minikube-integration/19761-3912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 10:22:20.622626   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:20.622453   11840 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa...
	I1007 10:22:20.764745   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:20.764586   11840 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/addons-681605.rawdisk...
	I1007 10:22:20.764774   11818 main.go:141] libmachine: (addons-681605) DBG | Writing magic tar header
	I1007 10:22:20.764788   11818 main.go:141] libmachine: (addons-681605) DBG | Writing SSH key tar header
	I1007 10:22:20.764800   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:20.764705   11840 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605 ...
	I1007 10:22:20.764812   11818 main.go:141] libmachine: (addons-681605) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605
	I1007 10:22:20.764884   11818 main.go:141] libmachine: (addons-681605) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605 (perms=drwx------)
	I1007 10:22:20.764913   11818 main.go:141] libmachine: (addons-681605) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines (perms=drwxr-xr-x)
	I1007 10:22:20.764926   11818 main.go:141] libmachine: (addons-681605) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines
	I1007 10:22:20.764940   11818 main.go:141] libmachine: (addons-681605) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:22:20.764949   11818 main.go:141] libmachine: (addons-681605) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912
	I1007 10:22:20.764958   11818 main.go:141] libmachine: (addons-681605) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 10:22:20.764966   11818 main.go:141] libmachine: (addons-681605) DBG | Checking permissions on dir: /home/jenkins
	I1007 10:22:20.764975   11818 main.go:141] libmachine: (addons-681605) DBG | Checking permissions on dir: /home
	I1007 10:22:20.764985   11818 main.go:141] libmachine: (addons-681605) DBG | Skipping /home - not owner
	I1007 10:22:20.765042   11818 main.go:141] libmachine: (addons-681605) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube (perms=drwxr-xr-x)
	I1007 10:22:20.765071   11818 main.go:141] libmachine: (addons-681605) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912 (perms=drwxrwxr-x)
	I1007 10:22:20.765081   11818 main.go:141] libmachine: (addons-681605) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 10:22:20.765088   11818 main.go:141] libmachine: (addons-681605) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 10:22:20.765098   11818 main.go:141] libmachine: (addons-681605) Creating domain...
	I1007 10:22:20.766018   11818 main.go:141] libmachine: (addons-681605) define libvirt domain using xml: 
	I1007 10:22:20.766050   11818 main.go:141] libmachine: (addons-681605) <domain type='kvm'>
	I1007 10:22:20.766059   11818 main.go:141] libmachine: (addons-681605)   <name>addons-681605</name>
	I1007 10:22:20.766071   11818 main.go:141] libmachine: (addons-681605)   <memory unit='MiB'>4000</memory>
	I1007 10:22:20.766082   11818 main.go:141] libmachine: (addons-681605)   <vcpu>2</vcpu>
	I1007 10:22:20.766089   11818 main.go:141] libmachine: (addons-681605)   <features>
	I1007 10:22:20.766097   11818 main.go:141] libmachine: (addons-681605)     <acpi/>
	I1007 10:22:20.766107   11818 main.go:141] libmachine: (addons-681605)     <apic/>
	I1007 10:22:20.766118   11818 main.go:141] libmachine: (addons-681605)     <pae/>
	I1007 10:22:20.766127   11818 main.go:141] libmachine: (addons-681605)     
	I1007 10:22:20.766137   11818 main.go:141] libmachine: (addons-681605)   </features>
	I1007 10:22:20.766152   11818 main.go:141] libmachine: (addons-681605)   <cpu mode='host-passthrough'>
	I1007 10:22:20.766163   11818 main.go:141] libmachine: (addons-681605)   
	I1007 10:22:20.766179   11818 main.go:141] libmachine: (addons-681605)   </cpu>
	I1007 10:22:20.766190   11818 main.go:141] libmachine: (addons-681605)   <os>
	I1007 10:22:20.766201   11818 main.go:141] libmachine: (addons-681605)     <type>hvm</type>
	I1007 10:22:20.766213   11818 main.go:141] libmachine: (addons-681605)     <boot dev='cdrom'/>
	I1007 10:22:20.766227   11818 main.go:141] libmachine: (addons-681605)     <boot dev='hd'/>
	I1007 10:22:20.766239   11818 main.go:141] libmachine: (addons-681605)     <bootmenu enable='no'/>
	I1007 10:22:20.766247   11818 main.go:141] libmachine: (addons-681605)   </os>
	I1007 10:22:20.766255   11818 main.go:141] libmachine: (addons-681605)   <devices>
	I1007 10:22:20.766262   11818 main.go:141] libmachine: (addons-681605)     <disk type='file' device='cdrom'>
	I1007 10:22:20.766289   11818 main.go:141] libmachine: (addons-681605)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/boot2docker.iso'/>
	I1007 10:22:20.766305   11818 main.go:141] libmachine: (addons-681605)       <target dev='hdc' bus='scsi'/>
	I1007 10:22:20.766314   11818 main.go:141] libmachine: (addons-681605)       <readonly/>
	I1007 10:22:20.766324   11818 main.go:141] libmachine: (addons-681605)     </disk>
	I1007 10:22:20.766335   11818 main.go:141] libmachine: (addons-681605)     <disk type='file' device='disk'>
	I1007 10:22:20.766348   11818 main.go:141] libmachine: (addons-681605)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 10:22:20.766363   11818 main.go:141] libmachine: (addons-681605)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/addons-681605.rawdisk'/>
	I1007 10:22:20.766377   11818 main.go:141] libmachine: (addons-681605)       <target dev='hda' bus='virtio'/>
	I1007 10:22:20.766388   11818 main.go:141] libmachine: (addons-681605)     </disk>
	I1007 10:22:20.766397   11818 main.go:141] libmachine: (addons-681605)     <interface type='network'>
	I1007 10:22:20.766410   11818 main.go:141] libmachine: (addons-681605)       <source network='mk-addons-681605'/>
	I1007 10:22:20.766420   11818 main.go:141] libmachine: (addons-681605)       <model type='virtio'/>
	I1007 10:22:20.766431   11818 main.go:141] libmachine: (addons-681605)     </interface>
	I1007 10:22:20.766444   11818 main.go:141] libmachine: (addons-681605)     <interface type='network'>
	I1007 10:22:20.766472   11818 main.go:141] libmachine: (addons-681605)       <source network='default'/>
	I1007 10:22:20.766491   11818 main.go:141] libmachine: (addons-681605)       <model type='virtio'/>
	I1007 10:22:20.766497   11818 main.go:141] libmachine: (addons-681605)     </interface>
	I1007 10:22:20.766514   11818 main.go:141] libmachine: (addons-681605)     <serial type='pty'>
	I1007 10:22:20.766522   11818 main.go:141] libmachine: (addons-681605)       <target port='0'/>
	I1007 10:22:20.766527   11818 main.go:141] libmachine: (addons-681605)     </serial>
	I1007 10:22:20.766534   11818 main.go:141] libmachine: (addons-681605)     <console type='pty'>
	I1007 10:22:20.766543   11818 main.go:141] libmachine: (addons-681605)       <target type='serial' port='0'/>
	I1007 10:22:20.766570   11818 main.go:141] libmachine: (addons-681605)     </console>
	I1007 10:22:20.766598   11818 main.go:141] libmachine: (addons-681605)     <rng model='virtio'>
	I1007 10:22:20.766613   11818 main.go:141] libmachine: (addons-681605)       <backend model='random'>/dev/random</backend>
	I1007 10:22:20.766620   11818 main.go:141] libmachine: (addons-681605)     </rng>
	I1007 10:22:20.766631   11818 main.go:141] libmachine: (addons-681605)     
	I1007 10:22:20.766639   11818 main.go:141] libmachine: (addons-681605)     
	I1007 10:22:20.766647   11818 main.go:141] libmachine: (addons-681605)   </devices>
	I1007 10:22:20.766655   11818 main.go:141] libmachine: (addons-681605) </domain>
	I1007 10:22:20.766663   11818 main.go:141] libmachine: (addons-681605) 
	I1007 10:22:20.772053   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a4:6e:99 in network default
	I1007 10:22:20.772584   11818 main.go:141] libmachine: (addons-681605) Ensuring networks are active...
	I1007 10:22:20.772605   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:20.773248   11818 main.go:141] libmachine: (addons-681605) Ensuring network default is active
	I1007 10:22:20.773530   11818 main.go:141] libmachine: (addons-681605) Ensuring network mk-addons-681605 is active
	I1007 10:22:20.773993   11818 main.go:141] libmachine: (addons-681605) Getting domain xml...
	I1007 10:22:20.774760   11818 main.go:141] libmachine: (addons-681605) Creating domain...
	I1007 10:22:22.161804   11818 main.go:141] libmachine: (addons-681605) Waiting to get IP...
	I1007 10:22:22.162554   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:22.162953   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:22.162994   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:22.162948   11840 retry.go:31] will retry after 302.185888ms: waiting for machine to come up
	I1007 10:22:22.466345   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:22.466811   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:22.466833   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:22.466769   11840 retry.go:31] will retry after 257.765553ms: waiting for machine to come up
	I1007 10:22:22.726158   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:22.726616   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:22.726647   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:22.726566   11840 retry.go:31] will retry after 409.131874ms: waiting for machine to come up
	I1007 10:22:23.137044   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:23.137411   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:23.137440   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:23.137398   11840 retry.go:31] will retry after 377.38954ms: waiting for machine to come up
	I1007 10:22:23.515929   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:23.516346   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:23.516381   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:23.516321   11840 retry.go:31] will retry after 503.053943ms: waiting for machine to come up
	I1007 10:22:24.020917   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:24.021331   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:24.021366   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:24.021289   11840 retry.go:31] will retry after 585.883351ms: waiting for machine to come up
	I1007 10:22:24.609003   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:24.609485   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:24.609509   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:24.609415   11840 retry.go:31] will retry after 975.976889ms: waiting for machine to come up
	I1007 10:22:25.587029   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:25.587445   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:25.587485   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:25.587380   11840 retry.go:31] will retry after 1.250631484s: waiting for machine to come up
	I1007 10:22:26.839409   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:26.839855   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:26.839884   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:26.839812   11840 retry.go:31] will retry after 1.518594311s: waiting for machine to come up
	I1007 10:22:28.360337   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:28.360732   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:28.360756   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:28.360671   11840 retry.go:31] will retry after 1.758664231s: waiting for machine to come up
	I1007 10:22:30.121081   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:30.121532   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:30.121562   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:30.121481   11840 retry.go:31] will retry after 1.798470244s: waiting for machine to come up
	I1007 10:22:31.922286   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:31.922746   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:31.922775   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:31.922711   11840 retry.go:31] will retry after 2.965673146s: waiting for machine to come up
	I1007 10:22:34.889581   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:34.889974   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:34.890009   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:34.889924   11840 retry.go:31] will retry after 3.598608124s: waiting for machine to come up
	I1007 10:22:38.490108   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:38.490436   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:38.490457   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:38.490398   11840 retry.go:31] will retry after 4.481598971s: waiting for machine to come up
	I1007 10:22:42.975128   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:42.975616   11818 main.go:141] libmachine: (addons-681605) Found IP for machine: 192.168.39.71
	I1007 10:22:42.975639   11818 main.go:141] libmachine: (addons-681605) Reserving static IP address...
	I1007 10:22:42.975675   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has current primary IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:42.975958   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find host DHCP lease matching {name: "addons-681605", mac: "52:54:00:a3:aa:32", ip: "192.168.39.71"} in network mk-addons-681605
	I1007 10:22:43.049700   11818 main.go:141] libmachine: (addons-681605) DBG | Getting to WaitForSSH function...
	I1007 10:22:43.049728   11818 main.go:141] libmachine: (addons-681605) Reserved static IP address: 192.168.39.71
	I1007 10:22:43.049740   11818 main.go:141] libmachine: (addons-681605) Waiting for SSH to be available...
	I1007 10:22:43.052685   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.053145   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:43.053192   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.053297   11818 main.go:141] libmachine: (addons-681605) DBG | Using SSH client type: external
	I1007 10:22:43.053332   11818 main.go:141] libmachine: (addons-681605) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa (-rw-------)
	I1007 10:22:43.053370   11818 main.go:141] libmachine: (addons-681605) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 10:22:43.053388   11818 main.go:141] libmachine: (addons-681605) DBG | About to run SSH command:
	I1007 10:22:43.053399   11818 main.go:141] libmachine: (addons-681605) DBG | exit 0
	I1007 10:22:43.184086   11818 main.go:141] libmachine: (addons-681605) DBG | SSH cmd err, output: <nil>: 
	I1007 10:22:43.184381   11818 main.go:141] libmachine: (addons-681605) KVM machine creation complete!
	I1007 10:22:43.184746   11818 main.go:141] libmachine: (addons-681605) Calling .GetConfigRaw
	I1007 10:22:43.185320   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:22:43.185500   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:22:43.185632   11818 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 10:22:43.185647   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:22:43.186766   11818 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 10:22:43.186781   11818 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 10:22:43.186786   11818 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 10:22:43.186791   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:22:43.188950   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.189290   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:43.189318   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.189422   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:22:43.189608   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:43.189739   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:43.189900   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:22:43.190041   11818 main.go:141] libmachine: Using SSH client type: native
	I1007 10:22:43.190236   11818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1007 10:22:43.190251   11818 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 10:22:43.291777   11818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:22:43.291801   11818 main.go:141] libmachine: Detecting the provisioner...
	I1007 10:22:43.291812   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:22:43.294213   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.294537   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:43.294562   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.294772   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:22:43.294949   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:43.295175   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:43.295301   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:22:43.295478   11818 main.go:141] libmachine: Using SSH client type: native
	I1007 10:22:43.295718   11818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1007 10:22:43.295733   11818 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 10:22:43.396977   11818 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 10:22:43.397053   11818 main.go:141] libmachine: found compatible host: buildroot
	I1007 10:22:43.397073   11818 main.go:141] libmachine: Provisioning with buildroot...
	I1007 10:22:43.397090   11818 main.go:141] libmachine: (addons-681605) Calling .GetMachineName
	I1007 10:22:43.397361   11818 buildroot.go:166] provisioning hostname "addons-681605"
	I1007 10:22:43.397384   11818 main.go:141] libmachine: (addons-681605) Calling .GetMachineName
	I1007 10:22:43.397588   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:22:43.400281   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.400645   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:43.400671   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.400867   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:22:43.401066   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:43.401271   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:43.401411   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:22:43.401588   11818 main.go:141] libmachine: Using SSH client type: native
	I1007 10:22:43.401758   11818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1007 10:22:43.401771   11818 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-681605 && echo "addons-681605" | sudo tee /etc/hostname
	I1007 10:22:43.521706   11818 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-681605
	
	I1007 10:22:43.521739   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:22:43.524322   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.524627   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:43.524654   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.524789   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:22:43.524995   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:43.525178   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:43.525325   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:22:43.525481   11818 main.go:141] libmachine: Using SSH client type: native
	I1007 10:22:43.525650   11818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1007 10:22:43.525669   11818 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-681605' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-681605/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-681605' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 10:22:43.637022   11818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:22:43.637049   11818 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 10:22:43.637098   11818 buildroot.go:174] setting up certificates
	I1007 10:22:43.637111   11818 provision.go:84] configureAuth start
	I1007 10:22:43.637127   11818 main.go:141] libmachine: (addons-681605) Calling .GetMachineName
	I1007 10:22:43.637381   11818 main.go:141] libmachine: (addons-681605) Calling .GetIP
	I1007 10:22:43.639967   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.640306   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:43.640332   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.640432   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:22:43.642670   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.643036   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:43.643069   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.643237   11818 provision.go:143] copyHostCerts
	I1007 10:22:43.643311   11818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 10:22:43.643472   11818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 10:22:43.643563   11818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 10:22:43.643638   11818 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.addons-681605 san=[127.0.0.1 192.168.39.71 addons-681605 localhost minikube]
	I1007 10:22:43.750599   11818 provision.go:177] copyRemoteCerts
	I1007 10:22:43.750651   11818 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 10:22:43.750673   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:22:43.753388   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.753808   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:43.753836   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.754050   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:22:43.754243   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:43.754393   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:22:43.754507   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:22:43.834400   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 10:22:43.859950   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 10:22:43.885070   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 10:22:43.910338   11818 provision.go:87] duration metric: took 273.202528ms to configureAuth
	I1007 10:22:43.910370   11818 buildroot.go:189] setting minikube options for container-runtime
	I1007 10:22:43.910568   11818 config.go:182] Loaded profile config "addons-681605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:22:43.910650   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:22:43.913827   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.914108   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:43.914135   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.914369   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:22:43.914539   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:43.914730   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:43.914830   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:22:43.914939   11818 main.go:141] libmachine: Using SSH client type: native
	I1007 10:22:43.915116   11818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1007 10:22:43.915136   11818 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 10:22:44.135204   11818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 10:22:44.135231   11818 main.go:141] libmachine: Checking connection to Docker...
	I1007 10:22:44.135241   11818 main.go:141] libmachine: (addons-681605) Calling .GetURL
	I1007 10:22:44.136402   11818 main.go:141] libmachine: (addons-681605) DBG | Using libvirt version 6000000
	I1007 10:22:44.138224   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.138526   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:44.138552   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.138724   11818 main.go:141] libmachine: Docker is up and running!
	I1007 10:22:44.138738   11818 main.go:141] libmachine: Reticulating splines...
	I1007 10:22:44.138746   11818 client.go:171] duration metric: took 24.044984593s to LocalClient.Create
	I1007 10:22:44.138771   11818 start.go:167] duration metric: took 24.045045516s to libmachine.API.Create "addons-681605"
	I1007 10:22:44.138792   11818 start.go:293] postStartSetup for "addons-681605" (driver="kvm2")
	I1007 10:22:44.138808   11818 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 10:22:44.138831   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:22:44.139042   11818 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 10:22:44.139065   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:22:44.141175   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.141471   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:44.141493   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.141610   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:22:44.141779   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:44.141924   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:22:44.142041   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:22:44.224277   11818 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 10:22:44.228883   11818 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 10:22:44.228913   11818 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 10:22:44.228995   11818 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 10:22:44.229023   11818 start.go:296] duration metric: took 90.223432ms for postStartSetup
	I1007 10:22:44.229054   11818 main.go:141] libmachine: (addons-681605) Calling .GetConfigRaw
	I1007 10:22:44.229607   11818 main.go:141] libmachine: (addons-681605) Calling .GetIP
	I1007 10:22:44.232055   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.232454   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:44.232483   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.232687   11818 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/config.json ...
	I1007 10:22:44.232868   11818 start.go:128] duration metric: took 24.157562052s to createHost
	I1007 10:22:44.232893   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:22:44.234840   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.235159   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:44.235183   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.235319   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:22:44.235458   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:44.235571   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:44.235708   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:22:44.235867   11818 main.go:141] libmachine: Using SSH client type: native
	I1007 10:22:44.236060   11818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1007 10:22:44.236072   11818 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 10:22:44.336691   11818 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728296564.311086132
	
	I1007 10:22:44.336715   11818 fix.go:216] guest clock: 1728296564.311086132
	I1007 10:22:44.336722   11818 fix.go:229] Guest: 2024-10-07 10:22:44.311086132 +0000 UTC Remote: 2024-10-07 10:22:44.232882006 +0000 UTC m=+24.261967860 (delta=78.204126ms)
	I1007 10:22:44.336760   11818 fix.go:200] guest clock delta is within tolerance: 78.204126ms
	I1007 10:22:44.336768   11818 start.go:83] releasing machines lock for "addons-681605", held for 24.261553295s
	I1007 10:22:44.336791   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:22:44.337047   11818 main.go:141] libmachine: (addons-681605) Calling .GetIP
	I1007 10:22:44.339485   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.339938   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:44.339963   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.340129   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:22:44.340672   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:22:44.340819   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:22:44.340920   11818 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 10:22:44.340973   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:22:44.341028   11818 ssh_runner.go:195] Run: cat /version.json
	I1007 10:22:44.341049   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:22:44.343366   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.343592   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:44.343627   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.343728   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.343760   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:22:44.343950   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:44.344119   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:44.344128   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:22:44.344146   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.344313   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:22:44.344318   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:22:44.344437   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:44.344569   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:22:44.344700   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:22:44.417367   11818 ssh_runner.go:195] Run: systemctl --version
	I1007 10:22:44.443721   11818 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 10:22:44.606811   11818 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 10:22:44.613290   11818 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 10:22:44.613349   11818 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 10:22:44.629602   11818 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 10:22:44.629633   11818 start.go:495] detecting cgroup driver to use...
	I1007 10:22:44.629695   11818 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 10:22:44.646010   11818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 10:22:44.661078   11818 docker.go:217] disabling cri-docker service (if available) ...
	I1007 10:22:44.661140   11818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 10:22:44.675927   11818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 10:22:44.690323   11818 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 10:22:44.802885   11818 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 10:22:44.951049   11818 docker.go:233] disabling docker service ...
	I1007 10:22:44.951110   11818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 10:22:44.966695   11818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 10:22:44.980644   11818 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 10:22:45.114859   11818 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 10:22:45.237145   11818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 10:22:45.251806   11818 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 10:22:45.271887   11818 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 10:22:45.271957   11818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:22:45.283594   11818 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 10:22:45.283669   11818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:22:45.294919   11818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:22:45.306479   11818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:22:45.318053   11818 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 10:22:45.329238   11818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:22:45.340723   11818 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:22:45.358754   11818 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:22:45.369559   11818 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 10:22:45.381007   11818 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 10:22:45.381085   11818 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 10:22:45.395053   11818 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 10:22:45.405374   11818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:22:45.515409   11818 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 10:22:45.607675   11818 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 10:22:45.607770   11818 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 10:22:45.612759   11818 start.go:563] Will wait 60s for crictl version
	I1007 10:22:45.612835   11818 ssh_runner.go:195] Run: which crictl
	I1007 10:22:45.616514   11818 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 10:22:45.655593   11818 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 10:22:45.655737   11818 ssh_runner.go:195] Run: crio --version
	I1007 10:22:45.685092   11818 ssh_runner.go:195] Run: crio --version
	I1007 10:22:45.716243   11818 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 10:22:45.717528   11818 main.go:141] libmachine: (addons-681605) Calling .GetIP
	I1007 10:22:45.720057   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:45.720336   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:45.720359   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:45.720579   11818 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 10:22:45.724783   11818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:22:45.737582   11818 kubeadm.go:883] updating cluster {Name:addons-681605 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-681605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 10:22:45.737732   11818 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:22:45.737793   11818 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:22:45.770595   11818 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 10:22:45.770674   11818 ssh_runner.go:195] Run: which lz4
	I1007 10:22:45.774750   11818 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 10:22:45.778965   11818 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 10:22:45.779003   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 10:22:47.101349   11818 crio.go:462] duration metric: took 1.326625678s to copy over tarball
	I1007 10:22:47.101414   11818 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 10:22:49.233715   11818 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.132271512s)
	I1007 10:22:49.233743   11818 crio.go:469] duration metric: took 2.132367893s to extract the tarball
	I1007 10:22:49.233752   11818 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 10:22:49.272079   11818 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:22:49.314282   11818 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 10:22:49.314305   11818 cache_images.go:84] Images are preloaded, skipping loading
	I1007 10:22:49.314312   11818 kubeadm.go:934] updating node { 192.168.39.71 8443 v1.31.1 crio true true} ...
	I1007 10:22:49.314403   11818 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-681605 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-681605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 10:22:49.314471   11818 ssh_runner.go:195] Run: crio config
	I1007 10:22:49.359239   11818 cni.go:84] Creating CNI manager for ""
	I1007 10:22:49.359269   11818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 10:22:49.359280   11818 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 10:22:49.359301   11818 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.71 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-681605 NodeName:addons-681605 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.71 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 10:22:49.359452   11818 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-681605"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.71
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.71"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 10:22:49.359516   11818 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 10:22:49.369815   11818 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 10:22:49.369880   11818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 10:22:49.379933   11818 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1007 10:22:49.397084   11818 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 10:22:49.414465   11818 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I1007 10:22:49.432657   11818 ssh_runner.go:195] Run: grep 192.168.39.71	control-plane.minikube.internal$ /etc/hosts
	I1007 10:22:49.436730   11818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.71	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:22:49.449941   11818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:22:49.581215   11818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:22:49.600064   11818 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605 for IP: 192.168.39.71
	I1007 10:22:49.600085   11818 certs.go:194] generating shared ca certs ...
	I1007 10:22:49.600101   11818 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:22:49.600256   11818 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 10:22:49.685062   11818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt ...
	I1007 10:22:49.685088   11818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt: {Name:mk1bebb0d608c2502f725269f89a728785649358 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:22:49.685273   11818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key ...
	I1007 10:22:49.685287   11818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key: {Name:mk0484bf94e36afd146e1707e22e8856544b1d70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:22:49.685387   11818 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 10:22:49.937366   11818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt ...
	I1007 10:22:49.937416   11818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt: {Name:mkf3ac5044e36edbadc1cf9a4d070f939dedff0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:22:49.937594   11818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key ...
	I1007 10:22:49.937607   11818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key: {Name:mk6c83283b65147b1395a3e37054954c48d7f3ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:22:49.937698   11818 certs.go:256] generating profile certs ...
	I1007 10:22:49.937751   11818 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.key
	I1007 10:22:49.937765   11818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt with IP's: []
	I1007 10:22:50.097103   11818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt ...
	I1007 10:22:50.097132   11818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: {Name:mkaa706578292541c6064467734dda876cf7cce9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:22:50.097290   11818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.key ...
	I1007 10:22:50.097300   11818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.key: {Name:mkf1fd7ffeb3ae97b2b345e6f9af0a37e79b50e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:22:50.097366   11818 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/apiserver.key.e594ed4a
	I1007 10:22:50.097382   11818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/apiserver.crt.e594ed4a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.71]
	I1007 10:22:50.161850   11818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/apiserver.crt.e594ed4a ...
	I1007 10:22:50.161876   11818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/apiserver.crt.e594ed4a: {Name:mk71e3521cc2b54e782a3ffce378308ee1bc4559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:22:50.162064   11818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/apiserver.key.e594ed4a ...
	I1007 10:22:50.162078   11818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/apiserver.key.e594ed4a: {Name:mk97fd44afa94c167f3de4d0934f7fdfaeb7ebe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:22:50.162167   11818 certs.go:381] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/apiserver.crt.e594ed4a -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/apiserver.crt
	I1007 10:22:50.162260   11818 certs.go:385] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/apiserver.key.e594ed4a -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/apiserver.key
	I1007 10:22:50.162309   11818 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/proxy-client.key
	I1007 10:22:50.162326   11818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/proxy-client.crt with IP's: []
	I1007 10:22:50.260625   11818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/proxy-client.crt ...
	I1007 10:22:50.260655   11818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/proxy-client.crt: {Name:mkb3a910d79c1560c2afe1e9f4d499332cc60ecf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:22:50.260828   11818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/proxy-client.key ...
	I1007 10:22:50.260841   11818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/proxy-client.key: {Name:mk69bfb1e2fe73bdc6a9a3af51018d17128bc8b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:22:50.261044   11818 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 10:22:50.261083   11818 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 10:22:50.261106   11818 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 10:22:50.261132   11818 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 10:22:50.261751   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 10:22:50.290469   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 10:22:50.314694   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 10:22:50.349437   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 10:22:50.375136   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1007 10:22:50.400642   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 10:22:50.425015   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 10:22:50.449922   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 10:22:50.474970   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 10:22:50.500011   11818 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 10:22:50.517368   11818 ssh_runner.go:195] Run: openssl version
	I1007 10:22:50.523142   11818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 10:22:50.534384   11818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:22:50.539014   11818 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:22:50.539071   11818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:22:50.544846   11818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 10:22:50.555696   11818 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 10:22:50.559827   11818 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 10:22:50.559876   11818 kubeadm.go:392] StartCluster: {Name:addons-681605 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-681605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:22:50.559954   11818 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 10:22:50.560034   11818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 10:22:50.596468   11818 cri.go:89] found id: ""
	I1007 10:22:50.596537   11818 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 10:22:50.606334   11818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 10:22:50.619572   11818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 10:22:50.630860   11818 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 10:22:50.630885   11818 kubeadm.go:157] found existing configuration files:
	
	I1007 10:22:50.630937   11818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 10:22:50.640489   11818 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 10:22:50.640583   11818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 10:22:50.651440   11818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 10:22:50.660909   11818 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 10:22:50.660973   11818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 10:22:50.671386   11818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 10:22:50.680417   11818 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 10:22:50.680474   11818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 10:22:50.689585   11818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 10:22:50.698694   11818 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 10:22:50.698751   11818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 10:22:50.708077   11818 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 10:22:50.756961   11818 kubeadm.go:310] W1007 10:22:50.738794     814 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 10:22:50.758578   11818 kubeadm.go:310] W1007 10:22:50.740773     814 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 10:22:50.861175   11818 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 10:23:01.504310   11818 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 10:23:01.504429   11818 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 10:23:01.504532   11818 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 10:23:01.504655   11818 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 10:23:01.504807   11818 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 10:23:01.504906   11818 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 10:23:01.506628   11818 out.go:235]   - Generating certificates and keys ...
	I1007 10:23:01.506732   11818 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 10:23:01.506829   11818 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 10:23:01.506930   11818 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 10:23:01.507012   11818 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 10:23:01.507090   11818 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 10:23:01.507158   11818 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 10:23:01.507229   11818 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 10:23:01.507404   11818 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-681605 localhost] and IPs [192.168.39.71 127.0.0.1 ::1]
	I1007 10:23:01.507481   11818 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 10:23:01.507655   11818 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-681605 localhost] and IPs [192.168.39.71 127.0.0.1 ::1]
	I1007 10:23:01.507748   11818 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 10:23:01.507839   11818 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 10:23:01.507904   11818 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 10:23:01.508000   11818 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 10:23:01.508050   11818 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 10:23:01.508131   11818 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 10:23:01.508210   11818 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 10:23:01.508299   11818 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 10:23:01.508374   11818 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 10:23:01.508462   11818 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 10:23:01.508561   11818 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 10:23:01.510171   11818 out.go:235]   - Booting up control plane ...
	I1007 10:23:01.510253   11818 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 10:23:01.510322   11818 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 10:23:01.510387   11818 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 10:23:01.510474   11818 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 10:23:01.510554   11818 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 10:23:01.510601   11818 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 10:23:01.510771   11818 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 10:23:01.510918   11818 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 10:23:01.511004   11818 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001994563s
	I1007 10:23:01.511095   11818 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 10:23:01.511177   11818 kubeadm.go:310] [api-check] The API server is healthy after 5.503427981s
	I1007 10:23:01.511323   11818 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 10:23:01.511437   11818 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 10:23:01.511506   11818 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 10:23:01.511663   11818 kubeadm.go:310] [mark-control-plane] Marking the node addons-681605 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 10:23:01.511711   11818 kubeadm.go:310] [bootstrap-token] Using token: ci493c.491qyxmhvgz2m1ga
	I1007 10:23:01.513203   11818 out.go:235]   - Configuring RBAC rules ...
	I1007 10:23:01.513316   11818 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 10:23:01.513390   11818 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 10:23:01.513515   11818 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 10:23:01.513622   11818 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 10:23:01.513734   11818 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 10:23:01.513814   11818 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 10:23:01.513915   11818 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 10:23:01.513952   11818 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 10:23:01.513990   11818 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 10:23:01.513996   11818 kubeadm.go:310] 
	I1007 10:23:01.514048   11818 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 10:23:01.514054   11818 kubeadm.go:310] 
	I1007 10:23:01.514140   11818 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 10:23:01.514148   11818 kubeadm.go:310] 
	I1007 10:23:01.514169   11818 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 10:23:01.514220   11818 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 10:23:01.514271   11818 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 10:23:01.514282   11818 kubeadm.go:310] 
	I1007 10:23:01.514331   11818 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 10:23:01.514337   11818 kubeadm.go:310] 
	I1007 10:23:01.514376   11818 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 10:23:01.514388   11818 kubeadm.go:310] 
	I1007 10:23:01.514433   11818 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 10:23:01.514499   11818 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 10:23:01.514561   11818 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 10:23:01.514566   11818 kubeadm.go:310] 
	I1007 10:23:01.514674   11818 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 10:23:01.514786   11818 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 10:23:01.514796   11818 kubeadm.go:310] 
	I1007 10:23:01.514898   11818 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ci493c.491qyxmhvgz2m1ga \
	I1007 10:23:01.515041   11818 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df \
	I1007 10:23:01.515066   11818 kubeadm.go:310] 	--control-plane 
	I1007 10:23:01.515070   11818 kubeadm.go:310] 
	I1007 10:23:01.515143   11818 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 10:23:01.515149   11818 kubeadm.go:310] 
	I1007 10:23:01.515241   11818 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ci493c.491qyxmhvgz2m1ga \
	I1007 10:23:01.515394   11818 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df 
	I1007 10:23:01.515407   11818 cni.go:84] Creating CNI manager for ""
	I1007 10:23:01.515419   11818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 10:23:01.517265   11818 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 10:23:01.518443   11818 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 10:23:01.537745   11818 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1007 10:23:01.556142   11818 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 10:23:01.556258   11818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:23:01.556278   11818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-681605 minikube.k8s.io/updated_at=2024_10_07T10_23_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f minikube.k8s.io/name=addons-681605 minikube.k8s.io/primary=true
	I1007 10:23:01.579297   11818 ops.go:34] apiserver oom_adj: -16
	I1007 10:23:01.693108   11818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:23:02.193713   11818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:23:02.693857   11818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:23:03.193876   11818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:23:03.693791   11818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:23:04.193848   11818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:23:04.693482   11818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:23:05.194078   11818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:23:05.277617   11818 kubeadm.go:1113] duration metric: took 3.721448408s to wait for elevateKubeSystemPrivileges
	I1007 10:23:05.277648   11818 kubeadm.go:394] duration metric: took 14.717774013s to StartCluster
	I1007 10:23:05.277663   11818 settings.go:142] acquiring lock: {Name:mk699f217216dbe513edf6a42c79fe85f8c20124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:23:05.277785   11818 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:23:05.278239   11818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/kubeconfig: {Name:mkc8a5ce1dbafe55e056433fff5c065506f83346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:23:05.278460   11818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 10:23:05.278485   11818 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1007 10:23:05.278470   11818 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:23:05.278606   11818 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-681605"
	I1007 10:23:05.278619   11818 addons.go:69] Setting cloud-spanner=true in profile "addons-681605"
	I1007 10:23:05.278634   11818 addons.go:234] Setting addon cloud-spanner=true in "addons-681605"
	I1007 10:23:05.278600   11818 addons.go:69] Setting inspektor-gadget=true in profile "addons-681605"
	I1007 10:23:05.278657   11818 addons.go:234] Setting addon inspektor-gadget=true in "addons-681605"
	I1007 10:23:05.278665   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.278689   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.278695   11818 config.go:182] Loaded profile config "addons-681605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:23:05.278696   11818 addons.go:69] Setting gcp-auth=true in profile "addons-681605"
	I1007 10:23:05.278709   11818 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-681605"
	I1007 10:23:05.278743   11818 mustload.go:65] Loading cluster: addons-681605
	I1007 10:23:05.278753   11818 addons.go:69] Setting registry=true in profile "addons-681605"
	I1007 10:23:05.278744   11818 addons.go:69] Setting ingress=true in profile "addons-681605"
	I1007 10:23:05.278774   11818 addons.go:69] Setting storage-provisioner=true in profile "addons-681605"
	I1007 10:23:05.278791   11818 addons.go:234] Setting addon storage-provisioner=true in "addons-681605"
	I1007 10:23:05.278796   11818 addons.go:234] Setting addon ingress=true in "addons-681605"
	I1007 10:23:05.278634   11818 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-681605"
	I1007 10:23:05.278806   11818 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-681605"
	I1007 10:23:05.278823   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.278826   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.278743   11818 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-681605"
	I1007 10:23:05.279237   11818 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-681605"
	I1007 10:23:05.279285   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.278766   11818 addons.go:234] Setting addon registry=true in "addons-681605"
	I1007 10:23:05.279374   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.278588   11818 addons.go:69] Setting yakd=true in profile "addons-681605"
	I1007 10:23:05.278831   11818 addons.go:69] Setting volumesnapshots=true in profile "addons-681605"
	I1007 10:23:05.279565   11818 addons.go:234] Setting addon yakd=true in "addons-681605"
	I1007 10:23:05.279599   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.279602   11818 config.go:182] Loaded profile config "addons-681605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:23:05.279615   11818 addons.go:234] Setting addon volumesnapshots=true in "addons-681605"
	I1007 10:23:05.279652   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.279868   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.279909   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.279904   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.279949   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.279144   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.280063   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.280085   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.280338   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.280374   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.279159   11818 addons.go:69] Setting ingress-dns=true in profile "addons-681605"
	I1007 10:23:05.280461   11818 addons.go:234] Setting addon ingress-dns=true in "addons-681605"
	I1007 10:23:05.279431   11818 addons.go:69] Setting volcano=true in profile "addons-681605"
	I1007 10:23:05.280468   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.279194   11818 addons.go:69] Setting default-storageclass=true in profile "addons-681605"
	I1007 10:23:05.280495   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.280490   11818 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-681605"
	I1007 10:23:05.280494   11818 addons.go:234] Setting addon volcano=true in "addons-681605"
	I1007 10:23:05.280533   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.280584   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.278598   11818 addons.go:69] Setting metrics-server=true in profile "addons-681605"
	I1007 10:23:05.280612   11818 addons.go:234] Setting addon metrics-server=true in "addons-681605"
	I1007 10:23:05.280637   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.280654   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.280661   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.280661   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.280741   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.280910   11818 out.go:177] * Verifying Kubernetes components...
	I1007 10:23:05.281102   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.281128   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.281145   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.281171   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.281199   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.281200   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.281208   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.281132   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.281753   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.282141   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.282372   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.282403   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.282615   11818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:23:05.298593   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36657
	I1007 10:23:05.299068   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39123
	I1007 10:23:05.299272   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.299467   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.299783   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.299818   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.300078   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.300096   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.300183   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.300583   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.300793   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.300806   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.300835   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.300974   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41031
	I1007 10:23:05.302490   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.308519   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.308585   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.308881   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.308924   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.310616   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.310655   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.328995   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.329180   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45237
	I1007 10:23:05.329790   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.329834   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.330258   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.330988   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.331031   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.331563   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38179
	I1007 10:23:05.331953   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
	I1007 10:23:05.332194   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.332383   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.332653   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.332671   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.333086   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.333335   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.333354   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.333452   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.333698   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.333765   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.334903   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.334927   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.335276   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.335456   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.337420   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43751
	I1007 10:23:05.338260   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.338302   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.340173   11818 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-681605"
	I1007 10:23:05.340226   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.340606   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.340627   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.341776   11818 addons.go:234] Setting addon default-storageclass=true in "addons-681605"
	I1007 10:23:05.341817   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.342213   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.342232   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.342548   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.348181   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.348217   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.348785   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.349604   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.349688   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.350070   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46423
	I1007 10:23:05.350691   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.351469   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.351488   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.351836   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.352402   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.352440   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.359200   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32781
	I1007 10:23:05.359963   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.360581   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.360601   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.360955   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.361498   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.361535   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.361767   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34075
	I1007 10:23:05.362738   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.363380   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.363398   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.363783   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.364381   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.364420   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.364689   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I1007 10:23:05.365381   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.365959   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.365978   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.366441   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.367080   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.367118   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.370149   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33947
	I1007 10:23:05.371202   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.371763   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.371782   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.372198   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.372405   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.374128   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42693
	I1007 10:23:05.374340   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.376347   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.376514   11818 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1007 10:23:05.377182   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.377201   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.378995   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.379030   11818 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1007 10:23:05.380248   11818 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1007 10:23:05.381879   11818 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1007 10:23:05.383114   11818 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1007 10:23:05.384242   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.384298   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.384519   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43793
	I1007 10:23:05.385063   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.385163   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39355
	I1007 10:23:05.385414   11818 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1007 10:23:05.385491   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.385897   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45825
	I1007 10:23:05.385962   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.385978   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.386108   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.386125   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.386456   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.386474   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.386678   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.386771   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.386906   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.386926   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.387369   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.387408   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.387632   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.387755   11818 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1007 10:23:05.388214   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.388275   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.390536   11818 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1007 10:23:05.390538   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35523
	I1007 10:23:05.391087   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.391755   11818 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1007 10:23:05.391780   11818 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1007 10:23:05.391802   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.392124   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.392143   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.392615   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.392819   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.394962   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I1007 10:23:05.395618   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.395716   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.396444   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39949
	I1007 10:23:05.396807   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.396579   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.397026   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.397043   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.397990   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36059
	I1007 10:23:05.397995   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.398173   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.398304   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:05.398822   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.398839   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.399231   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.399783   11818 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1007 10:23:05.400036   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44311
	I1007 10:23:05.402708   11818 out.go:177]   - Using image docker.io/registry:2.8.3
	I1007 10:23:05.404078   11818 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1007 10:23:05.404099   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1007 10:23:05.404123   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.407057   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36409
	I1007 10:23:05.407649   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.407736   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.408103   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33989
	I1007 10:23:05.408411   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.408428   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.408949   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.409029   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.409046   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.409589   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.409634   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.409798   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.409900   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.409941   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34287
	I1007 10:23:05.410093   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34467
	I1007 10:23:05.413956   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43849
	I1007 10:23:05.414146   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.414161   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.414213   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.414741   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44533
	I1007 10:23:05.416190   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.416234   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.416381   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:05.416465   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.416670   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.416709   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.416898   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.417031   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.417076   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.417089   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.417380   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.417395   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.417472   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.417613   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.417623   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.417680   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.418135   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.418150   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.418168   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.418141   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.418251   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.418292   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.418306   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.418318   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.418335   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.418909   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.418925   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.418976   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.419087   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.419113   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.419122   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.419221   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.419240   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.419271   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.419446   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.419877   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.419907   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.420294   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.420303   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.420484   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.421705   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.421932   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.422931   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.423909   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.424051   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.424284   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:05.424812   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:05.425213   11818 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 10:23:05.425575   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.425637   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:05.425656   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:05.425663   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:05.425671   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:05.425677   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:05.425773   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.425871   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.426162   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:05.426182   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	W1007 10:23:05.426245   11818 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1007 10:23:05.427404   11818 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1007 10:23:05.427531   11818 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1007 10:23:05.427636   11818 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1007 10:23:05.427735   11818 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1007 10:23:05.428830   11818 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1007 10:23:05.428845   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1007 10:23:05.428864   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.428947   11818 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1007 10:23:05.428995   11818 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1007 10:23:05.429186   11818 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1007 10:23:05.429205   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.429600   11818 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1007 10:23:05.429613   11818 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1007 10:23:05.429629   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.430449   11818 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1007 10:23:05.430465   11818 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1007 10:23:05.430483   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.430609   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38211
	I1007 10:23:05.431168   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.431371   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.431712   11818 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 10:23:05.431802   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41137
	I1007 10:23:05.432096   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.432110   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.432497   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.432564   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.432751   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.432811   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.433152   11818 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 10:23:05.433170   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1007 10:23:05.433187   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.433341   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.433359   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.433375   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.433390   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.433791   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.433972   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.434044   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.434280   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.434434   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:05.434724   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.434785   11818 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1007 10:23:05.434964   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.434990   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.435124   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.435143   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.435330   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.435389   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.435497   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.435648   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:05.435904   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.435922   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.435952   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.436095   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.436256   11818 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 10:23:05.436272   11818 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 10:23:05.436287   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.436329   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.437357   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:05.438352   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.439444   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.439464   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.439502   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.439692   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.439913   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.440075   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.440340   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:05.441464   11818 out.go:177]   - Using image docker.io/busybox:stable
	I1007 10:23:05.441472   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.441496   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.441702   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.441767   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.441899   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.441920   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.442135   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.442180   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.442293   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.442362   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.442395   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.442526   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.442711   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:05.442966   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.443126   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:05.444301   11818 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 10:23:05.444390   11818 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1007 10:23:05.446092   11818 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 10:23:05.446114   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1007 10:23:05.446137   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.446241   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36335
	I1007 10:23:05.446264   11818 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 10:23:05.446272   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 10:23:05.446283   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.446843   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.449147   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.449163   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.449554   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.449716   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.449755   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.450186   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.450195   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.450219   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.450403   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.450439   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.450462   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.450633   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.450678   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.450960   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.451003   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.451293   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:05.451587   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.451830   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:05.452104   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.454061   11818 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1007 10:23:05.454232   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I1007 10:23:05.454681   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.455169   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.455191   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.455684   11818 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 10:23:05.455702   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1007 10:23:05.455720   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.455815   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.455971   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.461669   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.461893   11818 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 10:23:05.461907   11818 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 10:23:05.461922   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.462722   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.463111   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.463129   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.463252   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.463383   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.463681   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.463799   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:05.464696   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38239
	I1007 10:23:05.464977   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.465118   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.465359   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.465383   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.465549   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.465705   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.465715   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.465774   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.465891   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.465971   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.466026   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:05.466275   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	W1007 10:23:05.466791   11818 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:55582->192.168.39.71:22: read: connection reset by peer
	I1007 10:23:05.466815   11818 retry.go:31] will retry after 304.283437ms: ssh: handshake failed: read tcp 192.168.39.1:55582->192.168.39.71:22: read: connection reset by peer
	I1007 10:23:05.467610   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.469821   11818 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1007 10:23:05.471324   11818 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 10:23:05.471342   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1007 10:23:05.471361   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.474286   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.474752   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.474770   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.474941   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.475121   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.475258   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.475405   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	W1007 10:23:05.484193   11818 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:55594->192.168.39.71:22: read: connection reset by peer
	I1007 10:23:05.484230   11818 retry.go:31] will retry after 231.103974ms: ssh: handshake failed: read tcp 192.168.39.1:55594->192.168.39.71:22: read: connection reset by peer
	I1007 10:23:05.696750   11818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:23:05.697236   11818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 10:23:05.822097   11818 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1007 10:23:05.822122   11818 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1007 10:23:05.835174   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 10:23:05.852988   11818 node_ready.go:35] waiting up to 6m0s for node "addons-681605" to be "Ready" ...
	I1007 10:23:05.869222   11818 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1007 10:23:05.869255   11818 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1007 10:23:05.877111   11818 node_ready.go:49] node "addons-681605" has status "Ready":"True"
	I1007 10:23:05.877143   11818 node_ready.go:38] duration metric: took 24.124157ms for node "addons-681605" to be "Ready" ...
	I1007 10:23:05.877156   11818 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 10:23:05.901571   11818 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-59fw7" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:05.980924   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 10:23:06.005004   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 10:23:06.015870   11818 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 10:23:06.015890   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1007 10:23:06.018263   11818 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1007 10:23:06.018286   11818 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1007 10:23:06.019663   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1007 10:23:06.024066   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 10:23:06.063251   11818 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1007 10:23:06.063279   11818 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1007 10:23:06.079842   11818 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1007 10:23:06.079867   11818 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1007 10:23:06.136943   11818 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1007 10:23:06.136966   11818 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1007 10:23:06.148171   11818 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 10:23:06.148191   11818 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 10:23:06.172027   11818 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1007 10:23:06.172053   11818 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1007 10:23:06.174059   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 10:23:06.175666   11818 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1007 10:23:06.175689   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1007 10:23:06.242171   11818 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1007 10:23:06.242200   11818 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1007 10:23:06.258124   11818 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 10:23:06.258148   11818 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 10:23:06.270416   11818 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1007 10:23:06.270438   11818 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1007 10:23:06.298648   11818 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1007 10:23:06.298675   11818 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1007 10:23:06.304149   11818 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1007 10:23:06.304173   11818 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1007 10:23:06.357129   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1007 10:23:06.358451   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 10:23:06.453317   11818 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1007 10:23:06.453345   11818 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1007 10:23:06.456524   11818 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1007 10:23:06.456550   11818 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1007 10:23:06.468963   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 10:23:06.495116   11818 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1007 10:23:06.495146   11818 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1007 10:23:06.506860   11818 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1007 10:23:06.506889   11818 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1007 10:23:06.670359   11818 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1007 10:23:06.670389   11818 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1007 10:23:06.679950   11818 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1007 10:23:06.679998   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1007 10:23:06.684830   11818 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1007 10:23:06.684856   11818 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1007 10:23:06.715425   11818 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 10:23:06.715449   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1007 10:23:06.818008   11818 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1007 10:23:06.818038   11818 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1007 10:23:06.856815   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1007 10:23:06.882407   11818 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1007 10:23:06.882448   11818 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1007 10:23:06.937670   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 10:23:07.095664   11818 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1007 10:23:07.095687   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1007 10:23:07.140207   11818 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1007 10:23:07.140236   11818 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1007 10:23:07.356350   11818 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 10:23:07.356379   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1007 10:23:07.379305   11818 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1007 10:23:07.379332   11818 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1007 10:23:07.522629   11818 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.82534643s)
	I1007 10:23:07.522666   11818 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1007 10:23:07.531379   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.696162237s)
	I1007 10:23:07.531453   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:07.531468   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:07.531797   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:07.531833   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:07.531848   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:07.531866   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:07.531874   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:07.532116   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:07.532133   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:07.532137   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:07.652094   11818 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1007 10:23:07.652120   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1007 10:23:07.657562   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 10:23:07.907915   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-59fw7" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:08.042230   11818 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-681605" context rescaled to 1 replicas
	I1007 10:23:08.053863   11818 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1007 10:23:08.053892   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1007 10:23:08.345320   11818 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 10:23:08.345348   11818 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1007 10:23:08.647604   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 10:23:09.628987   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.648023433s)
	I1007 10:23:09.629051   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:09.629067   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:09.629374   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:09.629397   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:09.629411   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:09.629419   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:09.629736   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:09.629760   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:09.910718   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-59fw7" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:12.489854   11818 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1007 10:23:12.489897   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:12.492796   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:12.493234   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:12.493262   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:12.493414   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:12.493616   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:12.493759   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:12.493895   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:12.510070   11818 pod_ready.go:93] pod "coredns-7c65d6cfc9-59fw7" in "kube-system" namespace has status "Ready":"True"
	I1007 10:23:12.510095   11818 pod_ready.go:82] duration metric: took 6.608495564s for pod "coredns-7c65d6cfc9-59fw7" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:12.510107   11818 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:12.841571   11818 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1007 10:23:12.968886   11818 addons.go:234] Setting addon gcp-auth=true in "addons-681605"
	I1007 10:23:12.968933   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:12.969328   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:12.969363   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:12.985171   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36929
	I1007 10:23:12.985610   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:12.986085   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:12.986103   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:12.986475   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:12.987066   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:12.987100   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:13.002726   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I1007 10:23:13.003191   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:13.003681   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:13.003707   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:13.004099   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:13.004287   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:13.005718   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:13.005996   11818 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1007 10:23:13.006023   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:13.008789   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:13.009161   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:13.009188   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:13.009350   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:13.009502   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:13.009623   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:13.009774   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:14.127201   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.122158292s)
	I1007 10:23:14.127278   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.127280   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.107593349s)
	I1007 10:23:14.127301   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.127312   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.127304   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.127373   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.103276251s)
	I1007 10:23:14.127411   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.127427   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.127434   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.953348872s)
	I1007 10:23:14.127463   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.127475   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.127520   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.770363402s)
	I1007 10:23:14.127547   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.127558   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.127619   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.769145292s)
	I1007 10:23:14.127640   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.127647   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.127742   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.658750291s)
	I1007 10:23:14.127758   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.127782   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.127816   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.270967408s)
	I1007 10:23:14.127843   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.127855   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.127919   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.190221576s)
	W1007 10:23:14.127941   11818 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1007 10:23:14.127958   11818 retry.go:31] will retry after 158.71667ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1007 10:23:14.128056   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.47046227s)
	I1007 10:23:14.128072   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.128080   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.128544   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.128554   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.128567   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.128573   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.128575   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.128580   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.128583   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.128588   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.128594   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.128914   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.128927   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.128935   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.128952   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.128999   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.129019   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.129024   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.129030   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.129036   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.129214   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.129237   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.129267   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.129275   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.129281   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.129320   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.129343   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.129348   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.129390   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.129396   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.129646   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.129664   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.129675   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.129682   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.129691   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.129698   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.129705   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.129712   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.129755   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.129774   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.129780   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.129788   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.129794   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.129805   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.129821   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.129848   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.129854   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.129863   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.129872   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.129880   11818 addons.go:475] Verifying addon metrics-server=true in "addons-681605"
	I1007 10:23:14.129914   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.129933   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.129940   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.129946   11818 addons.go:475] Verifying addon ingress=true in "addons-681605"
	I1007 10:23:14.130151   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.130174   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.130180   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.131189   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.131242   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.131406   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.131447   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.131484   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.131524   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.131752   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.131767   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.131827   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.131849   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.131855   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.132702   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.132722   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.132730   11818 addons.go:475] Verifying addon registry=true in "addons-681605"
	I1007 10:23:14.133706   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.133730   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.134044   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.134498   11818 out.go:177] * Verifying ingress addon...
	I1007 10:23:14.136306   11818 out.go:177] * Verifying registry addon...
	I1007 10:23:14.136306   11818 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-681605 service yakd-dashboard -n yakd-dashboard
	
	I1007 10:23:14.136945   11818 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1007 10:23:14.138164   11818 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1007 10:23:14.153590   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.153610   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.153861   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.153878   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	W1007 10:23:14.153974   11818 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1007 10:23:14.155090   11818 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1007 10:23:14.155109   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:14.156392   11818 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1007 10:23:14.156407   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:14.159567   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.159583   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.159818   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.159843   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.159852   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.287814   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 10:23:14.515972   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:14.644858   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:14.645456   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:15.143280   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:15.242818   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:15.669468   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:15.670963   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.023318912s)
	I1007 10:23:15.671001   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:15.671021   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:15.671042   11818 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.665021816s)
	I1007 10:23:15.671270   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:15.671279   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:15.671295   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:15.671317   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:15.671329   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:15.671525   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:15.671538   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:15.671546   11818 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-681605"
	I1007 10:23:15.673267   11818 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 10:23:15.674601   11818 out.go:177] * Verifying csi-hostpath-driver addon...
	I1007 10:23:15.675880   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:15.676213   11818 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1007 10:23:15.677147   11818 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1007 10:23:15.677333   11818 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1007 10:23:15.677348   11818 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1007 10:23:15.725075   11818 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1007 10:23:15.725114   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:15.753671   11818 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1007 10:23:15.753701   11818 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1007 10:23:15.873742   11818 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 10:23:15.873771   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1007 10:23:15.984058   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 10:23:16.142556   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:16.143054   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:16.181987   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:16.517735   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:16.642700   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:16.644830   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:16.682368   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:16.815360   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.52749728s)
	I1007 10:23:16.815421   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:16.815439   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:16.815728   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:16.815769   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:16.815727   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:16.815781   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:16.815874   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:16.816063   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:16.816065   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:16.816114   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:17.148120   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:17.148752   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:17.251345   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:17.375177   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.391071612s)
	I1007 10:23:17.375251   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:17.375273   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:17.375568   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:17.375592   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:17.375607   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:17.375618   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:17.375819   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:17.375836   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:17.375835   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:17.377716   11818 addons.go:475] Verifying addon gcp-auth=true in "addons-681605"
	I1007 10:23:17.379534   11818 out.go:177] * Verifying gcp-auth addon...
	I1007 10:23:17.381528   11818 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1007 10:23:17.430897   11818 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1007 10:23:17.430917   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:17.642784   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:17.642829   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:17.683115   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:17.886116   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:18.147327   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:18.147536   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:18.184808   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:18.387001   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:18.520419   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:18.643759   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:18.644075   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:18.681453   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:18.884530   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:19.142623   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:19.143559   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:19.182771   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:19.385165   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:19.642139   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:19.642164   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:19.682005   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:19.886167   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:20.142997   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:20.143104   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:20.181553   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:20.384904   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:20.642329   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:20.643074   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:20.682789   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:20.885857   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:21.018052   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:21.142420   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:21.142812   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:21.182775   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:21.385350   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:21.640903   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:21.641886   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:21.681498   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:21.885736   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:22.142253   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:22.142729   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:22.182293   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:22.386202   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:22.642385   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:22.643042   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:22.681963   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:22.885666   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:23.142423   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:23.142436   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:23.182345   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:23.385318   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:23.517634   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:23.644452   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:23.646085   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:23.682322   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:23.884780   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:24.141873   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:24.142168   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:24.181798   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:24.386060   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:24.641483   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:24.642963   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:24.682125   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:24.889277   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:25.143396   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:25.143601   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:25.182085   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:25.385464   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:25.640805   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:25.641874   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:25.681927   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:25.886914   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:26.016767   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:26.215536   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:26.216713   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:26.217083   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:26.385553   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:26.641375   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:26.641978   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:26.681796   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:26.885090   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:27.142297   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:27.143703   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:27.182643   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:27.385729   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:27.642468   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:27.642772   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:27.684354   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:27.885274   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:28.017706   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:28.141493   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:28.142190   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:28.182366   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:28.385267   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:28.642666   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:28.642774   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:28.681948   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:28.885491   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:29.140920   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:29.142180   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:29.182128   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:29.386708   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:29.641761   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:29.642695   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:29.682798   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:29.885081   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:30.140678   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:30.142532   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:30.182036   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:30.638434   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:30.642821   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:30.645103   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:30.647771   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:30.682215   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:30.886272   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:31.141793   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:31.142751   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:31.182194   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:31.386284   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:31.642183   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:31.643265   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:31.682033   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:31.885614   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:32.141947   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:32.142366   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:32.182150   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:32.386247   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:32.643336   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:32.645576   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:32.683112   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:32.885770   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:33.016463   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:33.141552   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:33.141848   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:33.181715   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:33.384458   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:33.641816   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:33.642600   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:33.681764   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:33.886028   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:34.142184   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:34.143160   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:34.182320   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:34.385629   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:34.640579   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:34.642581   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:34.681814   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:34.885758   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:35.142381   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:35.142773   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:35.183188   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:35.387334   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:35.517119   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:35.642177   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:35.642745   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:35.681958   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:35.885966   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:36.141279   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:36.142205   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:36.182081   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:36.385595   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:36.641171   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:36.642056   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:36.682689   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:36.885458   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:37.141655   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:37.141813   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:37.181278   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:37.384649   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:37.517799   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:37.642677   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:37.643075   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:37.683015   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:37.885389   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:38.141126   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:38.141719   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:38.181340   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:38.384516   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:38.641279   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:38.644121   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:38.682055   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:38.886437   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:39.141714   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:39.142113   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:39.182314   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:39.385826   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:39.642837   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:39.642973   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:39.681960   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:39.886040   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:40.016370   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:40.142465   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:40.142850   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:40.181079   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:40.385758   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:40.641990   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:40.642398   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:40.682214   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:40.886117   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:41.145828   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:41.147407   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:41.182601   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:41.386623   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:41.650673   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:41.651265   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:41.752547   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:41.885618   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:42.142009   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:42.142553   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:42.181895   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:42.385206   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:42.520396   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:42.641347   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:42.643769   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:42.681744   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:42.885734   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:43.142976   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:43.143017   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:43.182541   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:43.386615   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:43.642749   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:43.642857   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:43.681540   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:43.885062   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:44.142054   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:44.142738   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:44.181646   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:44.390514   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:44.641319   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:44.642204   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:44.681959   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:44.886125   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:45.017289   11818 pod_ready.go:93] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"True"
	I1007 10:23:45.017318   11818 pod_ready.go:82] duration metric: took 32.507202793s for pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:45.017330   11818 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-681605" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:45.022011   11818 pod_ready.go:93] pod "etcd-addons-681605" in "kube-system" namespace has status "Ready":"True"
	I1007 10:23:45.022038   11818 pod_ready.go:82] duration metric: took 4.700937ms for pod "etcd-addons-681605" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:45.022052   11818 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-681605" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:45.027119   11818 pod_ready.go:93] pod "kube-apiserver-addons-681605" in "kube-system" namespace has status "Ready":"True"
	I1007 10:23:45.027149   11818 pod_ready.go:82] duration metric: took 5.088063ms for pod "kube-apiserver-addons-681605" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:45.027160   11818 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-681605" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:45.031962   11818 pod_ready.go:93] pod "kube-controller-manager-addons-681605" in "kube-system" namespace has status "Ready":"True"
	I1007 10:23:45.032007   11818 pod_ready.go:82] duration metric: took 4.837357ms for pod "kube-controller-manager-addons-681605" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:45.032020   11818 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4rgzz" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:45.037845   11818 pod_ready.go:93] pod "kube-proxy-4rgzz" in "kube-system" namespace has status "Ready":"True"
	I1007 10:23:45.037868   11818 pod_ready.go:82] duration metric: took 5.841055ms for pod "kube-proxy-4rgzz" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:45.037876   11818 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-681605" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:45.143001   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:45.143362   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:45.184384   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:45.550610   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:45.551012   11818 pod_ready.go:93] pod "kube-scheduler-addons-681605" in "kube-system" namespace has status "Ready":"True"
	I1007 10:23:45.551032   11818 pod_ready.go:82] duration metric: took 513.14799ms for pod "kube-scheduler-addons-681605" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:45.551042   11818 pod_ready.go:39] duration metric: took 39.673875264s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 10:23:45.551062   11818 api_server.go:52] waiting for apiserver process to appear ...
	I1007 10:23:45.551125   11818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 10:23:45.574452   11818 api_server.go:72] duration metric: took 40.295870948s to wait for apiserver process to appear ...
	I1007 10:23:45.574478   11818 api_server.go:88] waiting for apiserver healthz status ...
	I1007 10:23:45.574498   11818 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1007 10:23:45.579296   11818 api_server.go:279] https://192.168.39.71:8443/healthz returned 200:
	ok
	I1007 10:23:45.580699   11818 api_server.go:141] control plane version: v1.31.1
	I1007 10:23:45.580727   11818 api_server.go:131] duration metric: took 6.241356ms to wait for apiserver health ...
	I1007 10:23:45.580736   11818 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 10:23:45.619754   11818 system_pods.go:59] 17 kube-system pods found
	I1007 10:23:45.619785   11818 system_pods.go:61] "coredns-7c65d6cfc9-9wqp6" [aab4529c-a075-4383-b45b-c26fa0aafe31] Running
	I1007 10:23:45.619792   11818 system_pods.go:61] "csi-hostpath-attacher-0" [6a722b86-0d68-4a92-84a4-a1db2bff5162] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1007 10:23:45.619800   11818 system_pods.go:61] "csi-hostpath-resizer-0" [5fedba4d-1b30-4b3a-904c-6e10b5381894] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1007 10:23:45.619807   11818 system_pods.go:61] "csi-hostpathplugin-ckx6s" [ef8f4f3f-592d-44e2-aa3f-3b372f01185d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1007 10:23:45.619811   11818 system_pods.go:61] "etcd-addons-681605" [d5a3f208-5b5a-4a86-89b9-8f30e1b08fff] Running
	I1007 10:23:45.619815   11818 system_pods.go:61] "kube-apiserver-addons-681605" [5739ca02-93c2-4efc-b639-906fdcb4c6b9] Running
	I1007 10:23:45.619818   11818 system_pods.go:61] "kube-controller-manager-addons-681605" [37f7ee25-9813-4354-bb91-288c87feaa2e] Running
	I1007 10:23:45.619823   11818 system_pods.go:61] "kube-ingress-dns-minikube" [e17c292c-1ebb-47e8-9d91-4a32661ea133] Running
	I1007 10:23:45.619826   11818 system_pods.go:61] "kube-proxy-4rgzz" [dfdd32a0-cf41-4a5e-ac6f-ccadb50f64a7] Running
	I1007 10:23:45.619830   11818 system_pods.go:61] "kube-scheduler-addons-681605" [744a8102-3a53-4e53-9770-95bf8e08d7c5] Running
	I1007 10:23:45.619835   11818 system_pods.go:61] "metrics-server-84c5f94fbc-z5fpj" [3b2974fc-b174-48a3-b7ed-5e1ae0743bb4] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 10:23:45.619839   11818 system_pods.go:61] "nvidia-device-plugin-daemonset-5qr65" [50ebff62-241e-44a1-a190-cbc7791e17c6] Running
	I1007 10:23:45.619848   11818 system_pods.go:61] "registry-66c9cd494c-j5b9g" [16a6aecf-e13b-4534-83e7-70fdf57bd954] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1007 10:23:45.619855   11818 system_pods.go:61] "registry-proxy-tr9b7" [2c257dda-ca4a-4383-904e-6a600fa871bd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1007 10:23:45.619863   11818 system_pods.go:61] "snapshot-controller-56fcc65765-68xj2" [ee9f6a14-fe4d-479b-8bbd-cc70f937e384] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1007 10:23:45.619872   11818 system_pods.go:61] "snapshot-controller-56fcc65765-jx5xc" [0dd2b0be-649e-4fca-9448-171e927c841c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1007 10:23:45.619875   11818 system_pods.go:61] "storage-provisioner" [be6826cc-4ed3-43a6-9da7-09ba7c596ecf] Running
	I1007 10:23:45.619881   11818 system_pods.go:74] duration metric: took 39.140308ms to wait for pod list to return data ...
	I1007 10:23:45.619888   11818 default_sa.go:34] waiting for default service account to be created ...
	I1007 10:23:45.651139   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:45.653483   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:45.686408   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:45.814844   11818 default_sa.go:45] found service account: "default"
	I1007 10:23:45.814869   11818 default_sa.go:55] duration metric: took 194.974633ms for default service account to be created ...
	I1007 10:23:45.814877   11818 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 10:23:45.885089   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:46.021093   11818 system_pods.go:86] 17 kube-system pods found
	I1007 10:23:46.021120   11818 system_pods.go:89] "coredns-7c65d6cfc9-9wqp6" [aab4529c-a075-4383-b45b-c26fa0aafe31] Running
	I1007 10:23:46.021128   11818 system_pods.go:89] "csi-hostpath-attacher-0" [6a722b86-0d68-4a92-84a4-a1db2bff5162] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1007 10:23:46.021135   11818 system_pods.go:89] "csi-hostpath-resizer-0" [5fedba4d-1b30-4b3a-904c-6e10b5381894] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1007 10:23:46.021142   11818 system_pods.go:89] "csi-hostpathplugin-ckx6s" [ef8f4f3f-592d-44e2-aa3f-3b372f01185d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1007 10:23:46.021146   11818 system_pods.go:89] "etcd-addons-681605" [d5a3f208-5b5a-4a86-89b9-8f30e1b08fff] Running
	I1007 10:23:46.021150   11818 system_pods.go:89] "kube-apiserver-addons-681605" [5739ca02-93c2-4efc-b639-906fdcb4c6b9] Running
	I1007 10:23:46.021154   11818 system_pods.go:89] "kube-controller-manager-addons-681605" [37f7ee25-9813-4354-bb91-288c87feaa2e] Running
	I1007 10:23:46.021158   11818 system_pods.go:89] "kube-ingress-dns-minikube" [e17c292c-1ebb-47e8-9d91-4a32661ea133] Running
	I1007 10:23:46.021161   11818 system_pods.go:89] "kube-proxy-4rgzz" [dfdd32a0-cf41-4a5e-ac6f-ccadb50f64a7] Running
	I1007 10:23:46.021164   11818 system_pods.go:89] "kube-scheduler-addons-681605" [744a8102-3a53-4e53-9770-95bf8e08d7c5] Running
	I1007 10:23:46.021169   11818 system_pods.go:89] "metrics-server-84c5f94fbc-z5fpj" [3b2974fc-b174-48a3-b7ed-5e1ae0743bb4] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 10:23:46.021174   11818 system_pods.go:89] "nvidia-device-plugin-daemonset-5qr65" [50ebff62-241e-44a1-a190-cbc7791e17c6] Running
	I1007 10:23:46.021182   11818 system_pods.go:89] "registry-66c9cd494c-j5b9g" [16a6aecf-e13b-4534-83e7-70fdf57bd954] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1007 10:23:46.021187   11818 system_pods.go:89] "registry-proxy-tr9b7" [2c257dda-ca4a-4383-904e-6a600fa871bd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1007 10:23:46.021194   11818 system_pods.go:89] "snapshot-controller-56fcc65765-68xj2" [ee9f6a14-fe4d-479b-8bbd-cc70f937e384] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1007 10:23:46.021204   11818 system_pods.go:89] "snapshot-controller-56fcc65765-jx5xc" [0dd2b0be-649e-4fca-9448-171e927c841c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1007 10:23:46.021210   11818 system_pods.go:89] "storage-provisioner" [be6826cc-4ed3-43a6-9da7-09ba7c596ecf] Running
	I1007 10:23:46.021217   11818 system_pods.go:126] duration metric: took 206.33548ms to wait for k8s-apps to be running ...
	I1007 10:23:46.021225   11818 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 10:23:46.021265   11818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 10:23:46.066314   11818 system_svc.go:56] duration metric: took 45.07976ms WaitForService to wait for kubelet
	I1007 10:23:46.066339   11818 kubeadm.go:582] duration metric: took 40.787761566s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 10:23:46.066358   11818 node_conditions.go:102] verifying NodePressure condition ...
	I1007 10:23:46.141613   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:46.142248   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:46.182504   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:46.215223   11818 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 10:23:46.215253   11818 node_conditions.go:123] node cpu capacity is 2
	I1007 10:23:46.215264   11818 node_conditions.go:105] duration metric: took 148.901881ms to run NodePressure ...
	I1007 10:23:46.215275   11818 start.go:241] waiting for startup goroutines ...
	I1007 10:23:46.386290   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:46.641642   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:46.641798   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:46.682782   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:46.885741   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:47.141898   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:47.142330   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:47.182035   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:47.385602   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:47.642686   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:47.643013   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:47.681204   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:47.885844   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:48.142148   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:48.142606   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:48.181450   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:48.385823   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:48.642441   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:48.642973   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:48.684325   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:48.885834   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:49.141944   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:49.144062   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:49.181676   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:49.385285   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:49.644109   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:49.644359   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:49.682403   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:49.885653   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:50.142167   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:50.143378   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:50.181934   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:50.387852   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:50.641575   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:50.643931   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:50.682043   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:50.955370   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:51.199294   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:51.199386   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:51.199678   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:51.385181   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:51.641371   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:51.641553   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:51.681296   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:51.885442   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:52.142942   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:52.143540   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:52.182446   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:52.385659   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:52.642174   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:52.643177   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:52.681826   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:52.885277   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:53.140898   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:53.141635   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:53.181520   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:53.385796   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:53.643302   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:53.643683   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:53.682723   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:53.885652   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:54.142017   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:54.142432   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:54.182521   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:54.384775   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:54.642218   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:54.643212   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:54.682639   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:54.885512   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:55.141958   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:55.142079   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:55.191577   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:55.385407   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:55.642172   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:55.642728   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:55.682467   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:55.886195   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:56.141256   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:56.142156   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:56.182053   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:56.388235   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:56.642036   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:56.642363   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:56.682602   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:56.885469   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:57.409842   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:57.410294   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:57.410920   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:57.411142   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:57.642459   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:57.642578   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:57.683180   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:57.886391   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:58.141439   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:58.141755   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:58.181137   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:58.389527   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:58.643022   11818 kapi.go:107] duration metric: took 44.504855053s to wait for kubernetes.io/minikube-addons=registry ...
	I1007 10:23:58.643648   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:58.682370   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:58.885830   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:59.142455   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:59.183672   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:59.385976   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:59.642274   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:59.681600   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:59.885487   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:00.142042   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:00.182673   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:00.385248   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:00.642553   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:00.682802   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:00.884729   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:01.142407   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:01.181776   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:01.384652   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:01.642475   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:01.682262   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:01.885596   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:02.141495   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:02.182160   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:02.385858   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:02.642104   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:02.681540   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:02.885896   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:03.143203   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:03.181611   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:03.386928   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:03.646148   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:03.683011   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:03.885778   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:04.141515   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:04.182202   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:04.385721   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:04.641649   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:04.681698   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:04.884816   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:05.142090   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:05.181549   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:05.384481   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:05.642000   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:05.681171   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:05.885936   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:06.141880   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:06.183444   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:06.386290   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:06.641988   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:06.682244   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:06.885231   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:07.186517   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:07.187199   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:07.386047   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:07.642068   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:07.681525   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:07.885928   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:08.146111   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:08.247082   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:08.385761   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:08.657778   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:08.682194   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:08.885286   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:09.140723   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:09.182469   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:09.387743   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:09.641672   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:09.682177   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:09.977414   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:10.146762   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:10.194396   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:10.386102   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:10.647850   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:10.684413   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:10.887634   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:11.141464   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:11.182322   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:11.385349   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:11.641680   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:11.682454   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:11.886441   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:12.141717   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:12.182133   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:12.386602   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:12.644260   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:12.688154   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:12.890761   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:13.142384   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:13.181754   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:13.384843   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:13.642440   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:13.682243   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:13.886070   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:14.141223   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:14.181866   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:14.385258   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:14.641055   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:14.681372   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:14.884661   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:15.141622   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:15.182258   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:15.386002   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:15.643004   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:15.682097   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:15.891659   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:16.141898   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:16.180900   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:16.385613   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:16.641856   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:16.681229   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:16.885245   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:17.141108   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:17.181760   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:17.385865   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:17.641619   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:17.681994   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:17.951684   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:18.141898   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:18.182985   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:18.385391   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:18.642173   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:18.693371   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:18.888505   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:19.141865   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:19.182196   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:19.386199   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:19.640767   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:19.686134   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:19.886336   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:20.142001   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:20.183357   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:20.385673   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:20.641018   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:20.682323   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:20.885387   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:21.141229   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:21.182571   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:21.384907   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:21.642978   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:21.681532   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:21.885211   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:22.141370   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:22.182486   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:22.385142   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:22.642992   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:22.682112   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:23.338431   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:23.338891   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:23.340879   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:23.437948   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:23.656162   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:23.684080   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:23.885482   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:24.141710   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:24.184755   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:24.385420   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:24.641877   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:24.681988   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:24.885583   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:25.143976   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:25.182651   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:25.441714   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:25.641956   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:25.742841   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:25.885630   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:26.141679   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:26.182576   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:26.389724   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:26.642108   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:26.682921   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:26.903339   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:27.146594   11818 kapi.go:107] duration metric: took 1m13.009643159s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1007 10:24:27.186265   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:27.386598   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:27.681847   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:27.884987   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:28.182191   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:28.385934   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:28.682183   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:28.885119   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:29.182784   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:29.386059   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:29.683579   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:29.885799   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:30.181544   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:30.386059   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:30.681816   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:30.885334   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:31.182616   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:31.385209   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:31.682597   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:31.885969   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:32.182120   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:32.385909   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:32.682124   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:32.885514   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:33.181888   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:33.385849   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:33.682471   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:33.885847   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:34.183481   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:34.385816   11818 kapi.go:107] duration metric: took 1m17.004285943s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1007 10:24:34.387530   11818 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-681605 cluster.
	I1007 10:24:34.388910   11818 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1007 10:24:34.390138   11818 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1007 10:24:34.682378   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:35.182982   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:35.682626   11818 kapi.go:107] duration metric: took 1m20.005480924s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1007 10:24:35.684541   11818 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, metrics-server, cloud-spanner, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1007 10:24:35.685962   11818 addons.go:510] duration metric: took 1m30.407473402s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns metrics-server cloud-spanner inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1007 10:24:35.686000   11818 start.go:246] waiting for cluster config update ...
	I1007 10:24:35.686015   11818 start.go:255] writing updated cluster config ...
	I1007 10:24:35.686305   11818 ssh_runner.go:195] Run: rm -f paused
	I1007 10:24:35.738086   11818 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 10:24:35.740209   11818 out.go:177] * Done! kubectl is now configured to use "addons-681605" cluster and "default" namespace by default
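	(Editor's note: the gcp-auth guidance printed above says a pod can opt out of the credential mount by carrying a label with the `gcp-auth-skip-secret` key. A minimal illustrative manifest is sketched below; the pod name is hypothetical and the label value "true" is the commonly used convention, not something stated in this log — only the label key comes from the minikube output. The image and command mirror the busybox test pod used elsewhere in this report.)
	
	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox-no-creds            # hypothetical name, for illustration only
	  labels:
	    gcp-auth-skip-secret: "true"    # opts this pod out of the gcp-auth credential mount
	spec:
	  containers:
	  - name: busybox
	    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    command: ["sleep", "3600"]
	
	Applied with `kubectl --context addons-681605 create -f <file>`, such a pod would be scheduled without the /google-app-creds.json volume and the GOOGLE_APPLICATION_CREDENTIALS environment variables that the gcp-auth webhook otherwise injects.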
	
	
	==> CRI-O <==
	Oct 07 10:35:37 addons-681605 crio[664]: time="2024-10-07 10:35:37.300628589Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297337300597043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:574201,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a08a1a4-d900-42c1-ac2a-dd90e5f8ba73 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:35:37 addons-681605 crio[664]: time="2024-10-07 10:35:37.301406466Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d69784ae-dbaa-4735-b412-95e43013956f name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:35:37 addons-681605 crio[664]: time="2024-10-07 10:35:37.301525498Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d69784ae-dbaa-4735-b412-95e43013956f name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:35:37 addons-681605 crio[664]: time="2024-10-07 10:35:37.302421297Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:04f28c679a3bb2acdb906c94443c0e8d1f68ba40a8476222c4fb9e17688ecf5d,PodSandboxId:b85936ceb669731e4a89ddd723f8ae1030e0398fbbc75e21b17f9f42dc58f149,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728297314430053246,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 418202f3-6a6f-41d5-bdd6-50b1f855a708,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d23e2bfca558d2e1f2c13dcfe86870650ccc6bfd84b66b7306113b89fae1e63,PodSandboxId:bb1f25c6957565196c546377b7d2913764bff9c239cc9fda010971769f33cd95,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728297198964750501,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e7bbb5ed-1d5a-45fd-9651-7ca5a6f91cb3,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7fc684a6c34b5cf39f6863f9936c98633ea5e90578c0eaa00c6752b90d7e4da,PodSandboxId:acf76b75684d7ba1d0f6b55d0c07f46d1c07c8e0e71d776ea41bd775153b6365,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1728296666310611079,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-c7s9w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9120f7a1-c4be-4f72-9321-527c86c20f7a,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ee70d1c010626be3d54cab50398d97658ab4de5c7f19c4472129d11bec0c7221,PodSandboxId:d6357ff1c85ae3e7e314c86cbacc5eef119ff90e590c2ccef938eeebc9c6c506,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Stat
e:CONTAINER_EXITED,CreatedAt:1728296650615527241,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-prnhv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 268277af-f9fb-41ed-9ded-767b57753a02,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667695d27c3d6d81f6c828372b58fc566e15ed1aaac47f18b573e96ac16ff0f1,PodSandboxId:2db0f16e30dd1d6d3f8a44248147d7cdf10975852d70a438fc2a90764918653b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52
b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728296650209572693,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jwqck,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c752ea3b-173e-4104-bc6b-ac8bca59c752,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:545cb0c2f3c144096e055e134148814f40dfc0123de566180be32a700e41a8cd,PodSandboxId:2624acffdea1face93e91ea9d89270e6d6db428e03b6b03bc35e3ff03e19e81b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728296622870041252,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-z5fpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b2974fc-b174-48a3-b7ed-5e1ae0743bb4,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a5b30d1393661ff93d57f6e0c807589c6282f62f8e1a2a06102fdf2a9be791,PodSandboxId:c2d090a79b3bc6f97ca7f21548aa370546a8ee63f1fd2c687ccdc2df4546f846,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@s
ha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728296602986999631,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e17c292c-1ebb-47e8-9d91-4a32661ea133,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bfcc864fd24ecec1bd47cefbd4862a5c81eea3a774c391eb6f52e1500765100,PodSandboxId:bccd3390c55ab91192232d439d657de3dc09f2068394a2677a044e8a980
3c959,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728296591548969829,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be6826cc-4ed3-43a6-9da7-09ba7c596ecf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:029ddf760b7bd2a215db7cbf9056dd1275da479a66e8ad2608684f502581c44d,PodSandboxId:a25a90d37dd076732b543ba31ceadba642ef939dff956343883b2ad760e5fcf0,Metada
ta:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728296590066275293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9wqp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aab4529c-a075-4383-b45b-c26fa0aafe31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod
.terminationGracePeriod: 30,},},&Container{Id:3aaca1813e5539b66196ea5774ab5c182a73ff7a8ed072735219a702948d55ac,PodSandboxId:9906a009b58dd634a81ef943f81bf05a60d43213678ef2e6b2cbb0ffcb8cb477,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728296587695085511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rgzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfdd32a0-cf41-4a5e-ac6f-ccadb50f64a7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&
Container{Id:826c1551b95744975f0b33601394494e26c3b4f5d2091d500173f4728914c986,PodSandboxId:7e956d65a276ee9a87ec610dffeebc21b3de28c7763b72cc894c788870f24acd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728296575214010531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3280de8a0ade9371dcef72b1a227164,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0383
be6b6b5b9b2193201563ddebf4a644741e039dc1c955bff56cb164fdeadf,PodSandboxId:de1b43a3c9242e1b11932aa3066d226e6be95919ff72533121ccc4ecf72a691b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728296575179452435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e70c3afee1998dc9e1caef3b70aa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7cb7eae3cd94948ab76
89353da458bae337ace3f78d3461c27161a1fca6580,PodSandboxId:efeecc70750deaf6963bb55892616a75b2e64286f00593ad76c077290fa2185e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728296575169232182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6025046ac98cbf1dd7cc6b413b770d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e54c8f73ed4745b552df307e9a411923ef0f4456ecb7acf57f12ed7a95f0bf13,Pod
SandboxId:907a02cce8b500eaa92e8619d47398fb7f3edc4928c3686aef9e744657a96dd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728296575129740703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc0818acf7b1d7e7c120d88eb58bcac,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d69784ae-dbaa-4735-b412
-95e43013956f name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:35:37 addons-681605 crio[664]: time="2024-10-07 10:35:37.350384685Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1c2520a2-8d4f-4452-98b0-e944866e9e35 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:35:37 addons-681605 crio[664]: time="2024-10-07 10:35:37.350558123Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1c2520a2-8d4f-4452-98b0-e944866e9e35 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:35:37 addons-681605 crio[664]: time="2024-10-07 10:35:37.351746678Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5d1ec172-8976-4651-ae78-2b59a58c5761 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:35:37 addons-681605 crio[664]: time="2024-10-07 10:35:37.353301134Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297337352842400,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:574201,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5d1ec172-8976-4651-ae78-2b59a58c5761 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:35:37 addons-681605 crio[664]: time="2024-10-07 10:35:37.354220287Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b94b8009-9a5b-486b-9a6c-77b428f6f48b name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:35:37 addons-681605 crio[664]: time="2024-10-07 10:35:37.354311973Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b94b8009-9a5b-486b-9a6c-77b428f6f48b name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:35:37 addons-681605 crio[664]: time="2024-10-07 10:35:37.354965534Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:04f28c679a3bb2acdb906c94443c0e8d1f68ba40a8476222c4fb9e17688ecf5d,PodSandboxId:b85936ceb669731e4a89ddd723f8ae1030e0398fbbc75e21b17f9f42dc58f149,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728297314430053246,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 418202f3-6a6f-41d5-bdd6-50b1f855a708,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d23e2bfca558d2e1f2c13dcfe86870650ccc6bfd84b66b7306113b89fae1e63,PodSandboxId:bb1f25c6957565196c546377b7d2913764bff9c239cc9fda010971769f33cd95,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728297198964750501,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e7bbb5ed-1d5a-45fd-9651-7ca5a6f91cb3,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7fc684a6c34b5cf39f6863f9936c98633ea5e90578c0eaa00c6752b90d7e4da,PodSandboxId:acf76b75684d7ba1d0f6b55d0c07f46d1c07c8e0e71d776ea41bd775153b6365,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1728296666310611079,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-c7s9w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9120f7a1-c4be-4f72-9321-527c86c20f7a,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ee70d1c010626be3d54cab50398d97658ab4de5c7f19c4472129d11bec0c7221,PodSandboxId:d6357ff1c85ae3e7e314c86cbacc5eef119ff90e590c2ccef938eeebc9c6c506,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Stat
e:CONTAINER_EXITED,CreatedAt:1728296650615527241,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-prnhv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 268277af-f9fb-41ed-9ded-767b57753a02,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667695d27c3d6d81f6c828372b58fc566e15ed1aaac47f18b573e96ac16ff0f1,PodSandboxId:2db0f16e30dd1d6d3f8a44248147d7cdf10975852d70a438fc2a90764918653b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52
b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728296650209572693,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jwqck,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c752ea3b-173e-4104-bc6b-ac8bca59c752,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:545cb0c2f3c144096e055e134148814f40dfc0123de566180be32a700e41a8cd,PodSandboxId:2624acffdea1face93e91ea9d89270e6d6db428e03b6b03bc35e3ff03e19e81b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728296622870041252,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-z5fpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b2974fc-b174-48a3-b7ed-5e1ae0743bb4,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a5b30d1393661ff93d57f6e0c807589c6282f62f8e1a2a06102fdf2a9be791,PodSandboxId:c2d090a79b3bc6f97ca7f21548aa370546a8ee63f1fd2c687ccdc2df4546f846,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@s
ha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728296602986999631,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e17c292c-1ebb-47e8-9d91-4a32661ea133,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bfcc864fd24ecec1bd47cefbd4862a5c81eea3a774c391eb6f52e1500765100,PodSandboxId:bccd3390c55ab91192232d439d657de3dc09f2068394a2677a044e8a980
3c959,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728296591548969829,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be6826cc-4ed3-43a6-9da7-09ba7c596ecf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:029ddf760b7bd2a215db7cbf9056dd1275da479a66e8ad2608684f502581c44d,PodSandboxId:a25a90d37dd076732b543ba31ceadba642ef939dff956343883b2ad760e5fcf0,Metada
ta:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728296590066275293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9wqp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aab4529c-a075-4383-b45b-c26fa0aafe31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod
.terminationGracePeriod: 30,},},&Container{Id:3aaca1813e5539b66196ea5774ab5c182a73ff7a8ed072735219a702948d55ac,PodSandboxId:9906a009b58dd634a81ef943f81bf05a60d43213678ef2e6b2cbb0ffcb8cb477,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728296587695085511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rgzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfdd32a0-cf41-4a5e-ac6f-ccadb50f64a7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&
Container{Id:826c1551b95744975f0b33601394494e26c3b4f5d2091d500173f4728914c986,PodSandboxId:7e956d65a276ee9a87ec610dffeebc21b3de28c7763b72cc894c788870f24acd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728296575214010531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3280de8a0ade9371dcef72b1a227164,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0383
be6b6b5b9b2193201563ddebf4a644741e039dc1c955bff56cb164fdeadf,PodSandboxId:de1b43a3c9242e1b11932aa3066d226e6be95919ff72533121ccc4ecf72a691b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728296575179452435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e70c3afee1998dc9e1caef3b70aa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7cb7eae3cd94948ab76
89353da458bae337ace3f78d3461c27161a1fca6580,PodSandboxId:efeecc70750deaf6963bb55892616a75b2e64286f00593ad76c077290fa2185e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728296575169232182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6025046ac98cbf1dd7cc6b413b770d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e54c8f73ed4745b552df307e9a411923ef0f4456ecb7acf57f12ed7a95f0bf13,Pod
SandboxId:907a02cce8b500eaa92e8619d47398fb7f3edc4928c3686aef9e744657a96dd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728296575129740703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc0818acf7b1d7e7c120d88eb58bcac,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b94b8009-9a5b-486b-9a6c
-77b428f6f48b name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:35:37 addons-681605 crio[664]: time="2024-10-07 10:35:37.394929586Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2297447e-82d8-467a-afe9-457aa2bc7385 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:35:37 addons-681605 crio[664]: time="2024-10-07 10:35:37.395023755Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2297447e-82d8-467a-afe9-457aa2bc7385 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:35:37 addons-681605 crio[664]: time="2024-10-07 10:35:37.396387182Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c75b5482-af99-4388-be8f-0a075cac9958 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:35:37 addons-681605 crio[664]: time="2024-10-07 10:35:37.398142686Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297337398113198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:574201,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c75b5482-af99-4388-be8f-0a075cac9958 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:35:37 addons-681605 crio[664]: time="2024-10-07 10:35:37.399107555Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b713e92a-62f3-423c-8d64-a522aa6ddb1a name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:35:37 addons-681605 crio[664]: time="2024-10-07 10:35:37.399168320Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b713e92a-62f3-423c-8d64-a522aa6ddb1a name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:35:37 addons-681605 crio[664]: time="2024-10-07 10:35:37.399453957Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:04f28c679a3bb2acdb906c94443c0e8d1f68ba40a8476222c4fb9e17688ecf5d,PodSandboxId:b85936ceb669731e4a89ddd723f8ae1030e0398fbbc75e21b17f9f42dc58f149,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728297314430053246,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 418202f3-6a6f-41d5-bdd6-50b1f855a708,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d23e2bfca558d2e1f2c13dcfe86870650ccc6bfd84b66b7306113b89fae1e63,PodSandboxId:bb1f25c6957565196c546377b7d2913764bff9c239cc9fda010971769f33cd95,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728297198964750501,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e7bbb5ed-1d5a-45fd-9651-7ca5a6f91cb3,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7fc684a6c34b5cf39f6863f9936c98633ea5e90578c0eaa00c6752b90d7e4da,PodSandboxId:acf76b75684d7ba1d0f6b55d0c07f46d1c07c8e0e71d776ea41bd775153b6365,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1728296666310611079,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-c7s9w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9120f7a1-c4be-4f72-9321-527c86c20f7a,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ee70d1c010626be3d54cab50398d97658ab4de5c7f19c4472129d11bec0c7221,PodSandboxId:d6357ff1c85ae3e7e314c86cbacc5eef119ff90e590c2ccef938eeebc9c6c506,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Stat
e:CONTAINER_EXITED,CreatedAt:1728296650615527241,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-prnhv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 268277af-f9fb-41ed-9ded-767b57753a02,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667695d27c3d6d81f6c828372b58fc566e15ed1aaac47f18b573e96ac16ff0f1,PodSandboxId:2db0f16e30dd1d6d3f8a44248147d7cdf10975852d70a438fc2a90764918653b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52
b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728296650209572693,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jwqck,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c752ea3b-173e-4104-bc6b-ac8bca59c752,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:545cb0c2f3c144096e055e134148814f40dfc0123de566180be32a700e41a8cd,PodSandboxId:2624acffdea1face93e91ea9d89270e6d6db428e03b6b03bc35e3ff03e19e81b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728296622870041252,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-z5fpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b2974fc-b174-48a3-b7ed-5e1ae0743bb4,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a5b30d1393661ff93d57f6e0c807589c6282f62f8e1a2a06102fdf2a9be791,PodSandboxId:c2d090a79b3bc6f97ca7f21548aa370546a8ee63f1fd2c687ccdc2df4546f846,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@s
ha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728296602986999631,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e17c292c-1ebb-47e8-9d91-4a32661ea133,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bfcc864fd24ecec1bd47cefbd4862a5c81eea3a774c391eb6f52e1500765100,PodSandboxId:bccd3390c55ab91192232d439d657de3dc09f2068394a2677a044e8a980
3c959,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728296591548969829,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be6826cc-4ed3-43a6-9da7-09ba7c596ecf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:029ddf760b7bd2a215db7cbf9056dd1275da479a66e8ad2608684f502581c44d,PodSandboxId:a25a90d37dd076732b543ba31ceadba642ef939dff956343883b2ad760e5fcf0,Metada
ta:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728296590066275293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9wqp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aab4529c-a075-4383-b45b-c26fa0aafe31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod
.terminationGracePeriod: 30,},},&Container{Id:3aaca1813e5539b66196ea5774ab5c182a73ff7a8ed072735219a702948d55ac,PodSandboxId:9906a009b58dd634a81ef943f81bf05a60d43213678ef2e6b2cbb0ffcb8cb477,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728296587695085511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rgzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfdd32a0-cf41-4a5e-ac6f-ccadb50f64a7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&
Container{Id:826c1551b95744975f0b33601394494e26c3b4f5d2091d500173f4728914c986,PodSandboxId:7e956d65a276ee9a87ec610dffeebc21b3de28c7763b72cc894c788870f24acd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728296575214010531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3280de8a0ade9371dcef72b1a227164,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0383
be6b6b5b9b2193201563ddebf4a644741e039dc1c955bff56cb164fdeadf,PodSandboxId:de1b43a3c9242e1b11932aa3066d226e6be95919ff72533121ccc4ecf72a691b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728296575179452435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e70c3afee1998dc9e1caef3b70aa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7cb7eae3cd94948ab76
89353da458bae337ace3f78d3461c27161a1fca6580,PodSandboxId:efeecc70750deaf6963bb55892616a75b2e64286f00593ad76c077290fa2185e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728296575169232182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6025046ac98cbf1dd7cc6b413b770d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e54c8f73ed4745b552df307e9a411923ef0f4456ecb7acf57f12ed7a95f0bf13,Pod
SandboxId:907a02cce8b500eaa92e8619d47398fb7f3edc4928c3686aef9e744657a96dd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728296575129740703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc0818acf7b1d7e7c120d88eb58bcac,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b713e92a-62f3-423c-8d64
-a522aa6ddb1a name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:35:37 addons-681605 crio[664]: time="2024-10-07 10:35:37.439696895Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7dae11c8-5fc5-4fce-b046-20d1826fdee6 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:35:37 addons-681605 crio[664]: time="2024-10-07 10:35:37.439769245Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7dae11c8-5fc5-4fce-b046-20d1826fdee6 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:35:37 addons-681605 crio[664]: time="2024-10-07 10:35:37.441112141Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8920b14f-cc25-4dc0-9205-d4495108c587 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:35:37 addons-681605 crio[664]: time="2024-10-07 10:35:37.442406052Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297337442374624,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:574201,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8920b14f-cc25-4dc0-9205-d4495108c587 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:35:37 addons-681605 crio[664]: time="2024-10-07 10:35:37.443266235Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1253498-16bb-4d73-a414-639a197fe6e5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:35:37 addons-681605 crio[664]: time="2024-10-07 10:35:37.443323963Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1253498-16bb-4d73-a414-639a197fe6e5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:35:37 addons-681605 crio[664]: time="2024-10-07 10:35:37.443702285Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:04f28c679a3bb2acdb906c94443c0e8d1f68ba40a8476222c4fb9e17688ecf5d,PodSandboxId:b85936ceb669731e4a89ddd723f8ae1030e0398fbbc75e21b17f9f42dc58f149,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728297314430053246,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 418202f3-6a6f-41d5-bdd6-50b1f855a708,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d23e2bfca558d2e1f2c13dcfe86870650ccc6bfd84b66b7306113b89fae1e63,PodSandboxId:bb1f25c6957565196c546377b7d2913764bff9c239cc9fda010971769f33cd95,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728297198964750501,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e7bbb5ed-1d5a-45fd-9651-7ca5a6f91cb3,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7fc684a6c34b5cf39f6863f9936c98633ea5e90578c0eaa00c6752b90d7e4da,PodSandboxId:acf76b75684d7ba1d0f6b55d0c07f46d1c07c8e0e71d776ea41bd775153b6365,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1728296666310611079,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-c7s9w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9120f7a1-c4be-4f72-9321-527c86c20f7a,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ee70d1c010626be3d54cab50398d97658ab4de5c7f19c4472129d11bec0c7221,PodSandboxId:d6357ff1c85ae3e7e314c86cbacc5eef119ff90e590c2ccef938eeebc9c6c506,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Stat
e:CONTAINER_EXITED,CreatedAt:1728296650615527241,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-prnhv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 268277af-f9fb-41ed-9ded-767b57753a02,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667695d27c3d6d81f6c828372b58fc566e15ed1aaac47f18b573e96ac16ff0f1,PodSandboxId:2db0f16e30dd1d6d3f8a44248147d7cdf10975852d70a438fc2a90764918653b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52
b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728296650209572693,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jwqck,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c752ea3b-173e-4104-bc6b-ac8bca59c752,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:545cb0c2f3c144096e055e134148814f40dfc0123de566180be32a700e41a8cd,PodSandboxId:2624acffdea1face93e91ea9d89270e6d6db428e03b6b03bc35e3ff03e19e81b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728296622870041252,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-z5fpj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b2974fc-b174-48a3-b7ed-5e1ae0743bb4,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a5b30d1393661ff93d57f6e0c807589c6282f62f8e1a2a06102fdf2a9be791,PodSandboxId:c2d090a79b3bc6f97ca7f21548aa370546a8ee63f1fd2c687ccdc2df4546f846,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@s
ha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728296602986999631,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e17c292c-1ebb-47e8-9d91-4a32661ea133,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bfcc864fd24ecec1bd47cefbd4862a5c81eea3a774c391eb6f52e1500765100,PodSandboxId:bccd3390c55ab91192232d439d657de3dc09f2068394a2677a044e8a980
3c959,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728296591548969829,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be6826cc-4ed3-43a6-9da7-09ba7c596ecf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:029ddf760b7bd2a215db7cbf9056dd1275da479a66e8ad2608684f502581c44d,PodSandboxId:a25a90d37dd076732b543ba31ceadba642ef939dff956343883b2ad760e5fcf0,Metada
ta:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728296590066275293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9wqp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aab4529c-a075-4383-b45b-c26fa0aafe31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod
.terminationGracePeriod: 30,},},&Container{Id:3aaca1813e5539b66196ea5774ab5c182a73ff7a8ed072735219a702948d55ac,PodSandboxId:9906a009b58dd634a81ef943f81bf05a60d43213678ef2e6b2cbb0ffcb8cb477,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728296587695085511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rgzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfdd32a0-cf41-4a5e-ac6f-ccadb50f64a7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&
Container{Id:826c1551b95744975f0b33601394494e26c3b4f5d2091d500173f4728914c986,PodSandboxId:7e956d65a276ee9a87ec610dffeebc21b3de28c7763b72cc894c788870f24acd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728296575214010531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3280de8a0ade9371dcef72b1a227164,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0383
be6b6b5b9b2193201563ddebf4a644741e039dc1c955bff56cb164fdeadf,PodSandboxId:de1b43a3c9242e1b11932aa3066d226e6be95919ff72533121ccc4ecf72a691b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728296575179452435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e70c3afee1998dc9e1caef3b70aa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7cb7eae3cd94948ab76
89353da458bae337ace3f78d3461c27161a1fca6580,PodSandboxId:efeecc70750deaf6963bb55892616a75b2e64286f00593ad76c077290fa2185e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728296575169232182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6025046ac98cbf1dd7cc6b413b770d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e54c8f73ed4745b552df307e9a411923ef0f4456ecb7acf57f12ed7a95f0bf13,Pod
SandboxId:907a02cce8b500eaa92e8619d47398fb7f3edc4928c3686aef9e744657a96dd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728296575129740703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc0818acf7b1d7e7c120d88eb58bcac,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f1253498-16bb-4d73-a414
-639a197fe6e5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	04f28c679a3bb       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          23 seconds ago      Running             busybox                   0                   b85936ceb6697       busybox
	7d23e2bfca558       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago       Running             nginx                     0                   bb1f25c695756       nginx
	a7fc684a6c34b       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             11 minutes ago      Running             controller                0                   acf76b75684d7       ingress-nginx-controller-bc57996ff-c7s9w
	ee70d1c010626       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                             11 minutes ago      Exited              patch                     1                   d6357ff1c85ae       ingress-nginx-admission-patch-prnhv
	667695d27c3d6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   11 minutes ago      Exited              create                    0                   2db0f16e30dd1       ingress-nginx-admission-create-jwqck
	545cb0c2f3c14       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        11 minutes ago      Running             metrics-server            0                   2624acffdea1f       metrics-server-84c5f94fbc-z5fpj
	49a5b30d13936       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             12 minutes ago      Running             minikube-ingress-dns      0                   c2d090a79b3bc       kube-ingress-dns-minikube
	1bfcc864fd24e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             12 minutes ago      Running             storage-provisioner       0                   bccd3390c55ab       storage-provisioner
	029ddf760b7bd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             12 minutes ago      Running             coredns                   0                   a25a90d37dd07       coredns-7c65d6cfc9-9wqp6
	3aaca1813e553       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             12 minutes ago      Running             kube-proxy                0                   9906a009b58dd       kube-proxy-4rgzz
	826c1551b9574       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             12 minutes ago      Running             kube-scheduler            0                   7e956d65a276e       kube-scheduler-addons-681605
	0383be6b6b5b9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             12 minutes ago      Running             kube-apiserver            0                   de1b43a3c9242       kube-apiserver-addons-681605
	3f7cb7eae3cd9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             12 minutes ago      Running             etcd                      0                   efeecc70750de       etcd-addons-681605
	e54c8f73ed474       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             12 minutes ago      Running             kube-controller-manager   0                   907a02cce8b50       kube-controller-manager-addons-681605
	
	
	==> coredns [029ddf760b7bd2a215db7cbf9056dd1275da479a66e8ad2608684f502581c44d] <==
	[INFO] 10.244.0.7:43631 - 27531 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 97 false 1232" NXDOMAIN qr,aa,rd 179 0.000132857s
	[INFO] 10.244.0.7:43631 - 42730 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000142756s
	[INFO] 10.244.0.7:43631 - 59702 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000074056s
	[INFO] 10.244.0.7:43631 - 25334 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000073284s
	[INFO] 10.244.0.7:43631 - 48632 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000054077s
	[INFO] 10.244.0.7:43631 - 18194 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000102123s
	[INFO] 10.244.0.7:43631 - 23222 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000083046s
	[INFO] 10.244.0.7:58291 - 36472 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000124534s
	[INFO] 10.244.0.7:58291 - 36215 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000131536s
	[INFO] 10.244.0.7:34519 - 47741 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000061551s
	[INFO] 10.244.0.7:34519 - 47524 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00004833s
	[INFO] 10.244.0.7:37426 - 12909 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000055431s
	[INFO] 10.244.0.7:37426 - 13115 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000062973s
	[INFO] 10.244.0.7:39637 - 33901 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000101309s
	[INFO] 10.244.0.7:39637 - 33729 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00017764s
	[INFO] 10.244.0.21:38851 - 28375 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00047586s
	[INFO] 10.244.0.21:47197 - 26095 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000214995s
	[INFO] 10.244.0.21:58406 - 29109 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000112903s
	[INFO] 10.244.0.21:39716 - 9453 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123272s
	[INFO] 10.244.0.21:35159 - 21709 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000325064s
	[INFO] 10.244.0.21:55146 - 62487 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117228s
	[INFO] 10.244.0.21:44311 - 26767 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000981606s
	[INFO] 10.244.0.21:57046 - 40261 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001305651s
	[INFO] 10.244.0.24:54142 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000516836s
	[INFO] 10.244.0.24:38990 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000255279s
	
	
	==> describe nodes <==
	Name:               addons-681605
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-681605
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f
	                    minikube.k8s.io/name=addons-681605
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T10_23_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-681605
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 10:22:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-681605
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 10:35:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 10:35:36 +0000   Mon, 07 Oct 2024 10:22:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 10:35:36 +0000   Mon, 07 Oct 2024 10:22:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 10:35:36 +0000   Mon, 07 Oct 2024 10:22:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 10:35:36 +0000   Mon, 07 Oct 2024 10:23:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.71
	  Hostname:    addons-681605
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc4a97a9ccc44fb1820bdad40fc00e6e
	  System UUID:                fc4a97a9-ccc4-4fb1-820b-dad40fc00e6e
	  Boot ID:                    c2e14225-5056-4cd9-9cd9-6d2a7db5e673
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-2h846            0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-c7s9w    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-9wqp6                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-681605                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-681605                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-681605       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-4rgzz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-681605                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-z5fpj             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-681605 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-681605 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-681605 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x2 over 12m)  kubelet          Node addons-681605 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x2 over 12m)  kubelet          Node addons-681605 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x2 over 12m)  kubelet          Node addons-681605 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                12m                kubelet          Node addons-681605 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-681605 event: Registered Node addons-681605 in Controller
	
	
	==> dmesg <==
	[  +0.094394] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.757771] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +1.622593] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.005703] kauditd_printk_skb: 134 callbacks suppressed
	[  +5.522451] kauditd_printk_skb: 109 callbacks suppressed
	[  +5.497092] kauditd_printk_skb: 41 callbacks suppressed
	[ +23.469706] kauditd_printk_skb: 6 callbacks suppressed
	[Oct 7 10:24] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.589798] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.022451] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.730863] kauditd_printk_skb: 10 callbacks suppressed
	[  +8.870517] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.001595] kauditd_printk_skb: 16 callbacks suppressed
	[Oct 7 10:32] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.616888] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.034904] kauditd_printk_skb: 9 callbacks suppressed
	[Oct 7 10:33] kauditd_printk_skb: 25 callbacks suppressed
	[  +6.634935] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.804073] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.063108] kauditd_printk_skb: 22 callbacks suppressed
	[  +9.807832] kauditd_printk_skb: 23 callbacks suppressed
	[  +7.911746] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.578594] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.618818] kauditd_printk_skb: 3 callbacks suppressed
	[Oct 7 10:35] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [3f7cb7eae3cd94948ab7689353da458bae337ace3f78d3461c27161a1fca6580] <==
	{"level":"warn","ts":"2024-10-07T10:24:09.964628Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.206748ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-07T10:24:09.964741Z","caller":"traceutil/trace.go:171","msg":"trace[544174874] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; response_count:0; response_revision:1002; }","duration":"171.293776ms","start":"2024-10-07T10:24:09.793397Z","end":"2024-10-07T10:24:09.964691Z","steps":["trace[544174874] 'agreement among raft nodes before linearized reading'  (duration: 171.184731ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T10:24:17.938252Z","caller":"traceutil/trace.go:171","msg":"trace[1932228105] transaction","detail":"{read_only:false; response_revision:1076; number_of_response:1; }","duration":"190.072431ms","start":"2024-10-07T10:24:17.748165Z","end":"2024-10-07T10:24:17.938237Z","steps":["trace[1932228105] 'process raft request'  (duration: 189.918692ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T10:24:23.322156Z","caller":"traceutil/trace.go:171","msg":"trace[50191854] linearizableReadLoop","detail":"{readStateIndex:1142; appliedIndex:1141; }","duration":"449.261429ms","start":"2024-10-07T10:24:22.872871Z","end":"2024-10-07T10:24:23.322133Z","steps":["trace[50191854] 'read index received'  (duration: 449.12947ms)","trace[50191854] 'applied index is now lower than readState.Index'  (duration: 131.634µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T10:24:23.322247Z","caller":"traceutil/trace.go:171","msg":"trace[41529296] transaction","detail":"{read_only:false; response_revision:1105; number_of_response:1; }","duration":"456.168464ms","start":"2024-10-07T10:24:22.866072Z","end":"2024-10-07T10:24:23.322241Z","steps":["trace[41529296] 'process raft request'  (duration: 455.968224ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T10:24:23.322450Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.275342ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T10:24:23.322529Z","caller":"traceutil/trace.go:171","msg":"trace[1969370765] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1105; }","duration":"194.366423ms","start":"2024-10-07T10:24:23.128154Z","end":"2024-10-07T10:24:23.322520Z","steps":["trace[1969370765] 'agreement among raft nodes before linearized reading'  (duration: 194.215474ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T10:24:23.322666Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"449.80948ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T10:24:23.322683Z","caller":"traceutil/trace.go:171","msg":"trace[1093079039] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1105; }","duration":"449.83037ms","start":"2024-10-07T10:24:22.872848Z","end":"2024-10-07T10:24:23.322679Z","steps":["trace[1093079039] 'agreement among raft nodes before linearized reading'  (duration: 449.797367ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T10:24:23.322697Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T10:24:22.872815Z","time spent":"449.878291ms","remote":"127.0.0.1:41354","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-07T10:24:23.322840Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T10:24:22.866021Z","time spent":"456.248719ms","remote":"127.0.0.1:41420","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-681605\" mod_revision:1034 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-681605\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-681605\" > >"}
	{"level":"warn","ts":"2024-10-07T10:24:23.322951Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.521881ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T10:24:23.322968Z","caller":"traceutil/trace.go:171","msg":"trace[1979077163] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1105; }","duration":"154.541677ms","start":"2024-10-07T10:24:23.168422Z","end":"2024-10-07T10:24:23.322963Z","steps":["trace[1979077163] 'agreement among raft nodes before linearized reading'  (duration: 154.495316ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T10:24:23.323069Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.112164ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:497"}
	{"level":"info","ts":"2024-10-07T10:24:23.323084Z","caller":"traceutil/trace.go:171","msg":"trace[1647348850] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1105; }","duration":"180.127431ms","start":"2024-10-07T10:24:23.142952Z","end":"2024-10-07T10:24:23.323079Z","steps":["trace[1647348850] 'agreement among raft nodes before linearized reading'  (duration: 180.074585ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T10:24:34.940375Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.522226ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10522258548344374760 > lease_revoke:<id:1206926680e244f2>","response":"size:28"}
	{"level":"info","ts":"2024-10-07T10:25:05.928603Z","caller":"traceutil/trace.go:171","msg":"trace[1639356212] transaction","detail":"{read_only:false; response_revision:1253; number_of_response:1; }","duration":"274.769629ms","start":"2024-10-07T10:25:05.653809Z","end":"2024-10-07T10:25:05.928578Z","steps":["trace[1639356212] 'process raft request'  (duration: 274.281485ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T10:32:56.602471Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1507}
	{"level":"info","ts":"2024-10-07T10:32:56.636763Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1507,"took":"33.674116ms","hash":2970239771,"current-db-size-bytes":6414336,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":3653632,"current-db-size-in-use":"3.7 MB"}
	{"level":"info","ts":"2024-10-07T10:32:56.636847Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2970239771,"revision":1507,"compact-revision":-1}
	{"level":"info","ts":"2024-10-07T10:33:24.021986Z","caller":"traceutil/trace.go:171","msg":"trace[981934162] transaction","detail":"{read_only:false; response_revision:2233; number_of_response:1; }","duration":"168.098685ms","start":"2024-10-07T10:33:23.853841Z","end":"2024-10-07T10:33:24.021940Z","steps":["trace[981934162] 'process raft request'  (duration: 167.913699ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T10:33:40.532899Z","caller":"traceutil/trace.go:171","msg":"trace[479196927] transaction","detail":"{read_only:false; response_revision:2312; number_of_response:1; }","duration":"248.468937ms","start":"2024-10-07T10:33:40.284413Z","end":"2024-10-07T10:33:40.532882Z","steps":["trace[479196927] 'process raft request'  (duration: 248.25212ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T10:33:40.534572Z","caller":"traceutil/trace.go:171","msg":"trace[794250427] linearizableReadLoop","detail":"{readStateIndex:2476; appliedIndex:2476; }","duration":"177.759273ms","start":"2024-10-07T10:33:40.356795Z","end":"2024-10-07T10:33:40.534555Z","steps":["trace[794250427] 'read index received'  (duration: 177.751934ms)","trace[794250427] 'applied index is now lower than readState.Index'  (duration: 6.388µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-07T10:33:40.534987Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.120919ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets/\" range_end:\"/registry/statefulsets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-07T10:33:40.535041Z","caller":"traceutil/trace.go:171","msg":"trace[1440464097] range","detail":"{range_begin:/registry/statefulsets/; range_end:/registry/statefulsets0; response_count:0; response_revision:2312; }","duration":"178.260499ms","start":"2024-10-07T10:33:40.356773Z","end":"2024-10-07T10:33:40.535033Z","steps":["trace[1440464097] 'agreement among raft nodes before linearized reading'  (duration: 178.094448ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:35:37 up 13 min,  0 users,  load average: 0.55, 0.67, 0.43
	Linux addons-681605 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0383be6b6b5b9b2193201563ddebf4a644741e039dc1c955bff56cb164fdeadf] <==
	I1007 10:33:11.879247       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1007 10:33:12.087461       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.46.37"}
	E1007 10:33:40.177109       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1007 10:33:46.572267       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1007 10:33:48.274251       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 10:33:49.286418       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 10:33:50.295596       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 10:33:51.303059       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 10:33:52.314948       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 10:33:53.323332       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 10:33:54.330449       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1007 10:34:01.254208       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 10:34:01.254286       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 10:34:01.309033       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 10:34:01.309115       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 10:34:01.354686       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 10:34:01.354868       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 10:34:01.408879       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 10:34:01.411242       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 10:34:01.436955       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 10:34:01.437008       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1007 10:34:02.409398       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1007 10:34:02.437399       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1007 10:34:02.450750       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1007 10:35:36.270404       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.129.42"}
	
	
	==> kube-controller-manager [e54c8f73ed4745b552df307e9a411923ef0f4456ecb7acf57f12ed7a95f0bf13] <==
	I1007 10:34:12.573552       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	W1007 10:34:16.489312       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:34:16.489356       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:34:17.059729       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:34:17.060056       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:34:22.077099       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:34:22.077223       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:34:35.092986       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:34:35.093195       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:34:36.080836       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:34:36.080972       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:34:36.196525       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:34:36.196579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:34:48.430612       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:34:48.430734       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:35:04.311798       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:35:04.311931       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:35:13.695580       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:35:13.695787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:35:20.734930       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:35:20.735085       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1007 10:35:36.067436       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="34.982305ms"
	I1007 10:35:36.109785       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="42.21271ms"
	I1007 10:35:36.109873       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="40.955µs"
	I1007 10:35:36.691442       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-681605"
	
	
	==> kube-proxy [3aaca1813e5539b66196ea5774ab5c182a73ff7a8ed072735219a702948d55ac] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 10:23:10.882041       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 10:23:10.908065       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.71"]
	E1007 10:23:10.908138       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 10:23:11.040798       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 10:23:11.040830       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 10:23:11.040861       1 server_linux.go:169] "Using iptables Proxier"
	I1007 10:23:11.090272       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 10:23:11.090611       1 server.go:483] "Version info" version="v1.31.1"
	I1007 10:23:11.090623       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 10:23:11.124928       1 config.go:199] "Starting service config controller"
	I1007 10:23:11.124954       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 10:23:11.124988       1 config.go:105] "Starting endpoint slice config controller"
	I1007 10:23:11.124992       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 10:23:11.138925       1 config.go:328] "Starting node config controller"
	I1007 10:23:11.138957       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 10:23:11.225066       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 10:23:11.225130       1 shared_informer.go:320] Caches are synced for service config
	I1007 10:23:11.239294       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [826c1551b95744975f0b33601394494e26c3b4f5d2091d500173f4728914c986] <==
	W1007 10:22:57.993045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1007 10:22:57.993100       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 10:22:57.993230       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 10:22:57.993264       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 10:22:57.993293       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 10:22:57.993327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 10:22:58.840577       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1007 10:22:58.840628       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1007 10:22:58.896225       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 10:22:58.896360       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1007 10:22:58.904624       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1007 10:22:58.905589       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 10:22:59.063895       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 10:22:59.064057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 10:22:59.091379       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1007 10:22:59.091525       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 10:22:59.103952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 10:22:59.104075       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 10:22:59.135007       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1007 10:22:59.136210       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 10:22:59.152078       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 10:22:59.153303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 10:22:59.217608       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 10:22:59.217769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1007 10:23:00.885675       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 07 10:35:36 addons-681605 kubelet[1197]: E1007 10:35:36.072166    1197 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f77291cb-f852-485b-a01d-5ac154aa94e2" containerName="local-path-provisioner"
	Oct 07 10:35:36 addons-681605 kubelet[1197]: E1007 10:35:36.072237    1197 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ef8f4f3f-592d-44e2-aa3f-3b372f01185d" containerName="node-driver-registrar"
	Oct 07 10:35:36 addons-681605 kubelet[1197]: E1007 10:35:36.072276    1197 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8fce54f2-fd28-44c3-a9b2-ad5444cab831" containerName="task-pv-container"
	Oct 07 10:35:36 addons-681605 kubelet[1197]: E1007 10:35:36.072319    1197 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ef8f4f3f-592d-44e2-aa3f-3b372f01185d" containerName="hostpath"
	Oct 07 10:35:36 addons-681605 kubelet[1197]: E1007 10:35:36.072353    1197 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5fedba4d-1b30-4b3a-904c-6e10b5381894" containerName="csi-resizer"
	Oct 07 10:35:36 addons-681605 kubelet[1197]: E1007 10:35:36.072388    1197 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6a722b86-0d68-4a92-84a4-a1db2bff5162" containerName="csi-attacher"
	Oct 07 10:35:36 addons-681605 kubelet[1197]: E1007 10:35:36.072421    1197 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ef8f4f3f-592d-44e2-aa3f-3b372f01185d" containerName="liveness-probe"
	Oct 07 10:35:36 addons-681605 kubelet[1197]: E1007 10:35:36.072456    1197 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0dd2b0be-649e-4fca-9448-171e927c841c" containerName="volume-snapshot-controller"
	Oct 07 10:35:36 addons-681605 kubelet[1197]: E1007 10:35:36.072543    1197 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ef8f4f3f-592d-44e2-aa3f-3b372f01185d" containerName="csi-provisioner"
	Oct 07 10:35:36 addons-681605 kubelet[1197]: E1007 10:35:36.072580    1197 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee9f6a14-fe4d-479b-8bbd-cc70f937e384" containerName="volume-snapshot-controller"
	Oct 07 10:35:36 addons-681605 kubelet[1197]: E1007 10:35:36.072615    1197 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ef8f4f3f-592d-44e2-aa3f-3b372f01185d" containerName="csi-external-health-monitor-controller"
	Oct 07 10:35:36 addons-681605 kubelet[1197]: E1007 10:35:36.072650    1197 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ef8f4f3f-592d-44e2-aa3f-3b372f01185d" containerName="csi-snapshotter"
	Oct 07 10:35:36 addons-681605 kubelet[1197]: I1007 10:35:36.072773    1197 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef8f4f3f-592d-44e2-aa3f-3b372f01185d" containerName="csi-snapshotter"
	Oct 07 10:35:36 addons-681605 kubelet[1197]: I1007 10:35:36.072840    1197 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fce54f2-fd28-44c3-a9b2-ad5444cab831" containerName="task-pv-container"
	Oct 07 10:35:36 addons-681605 kubelet[1197]: I1007 10:35:36.072871    1197 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee9f6a14-fe4d-479b-8bbd-cc70f937e384" containerName="volume-snapshot-controller"
	Oct 07 10:35:36 addons-681605 kubelet[1197]: I1007 10:35:36.072901    1197 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef8f4f3f-592d-44e2-aa3f-3b372f01185d" containerName="node-driver-registrar"
	Oct 07 10:35:36 addons-681605 kubelet[1197]: I1007 10:35:36.072930    1197 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef8f4f3f-592d-44e2-aa3f-3b372f01185d" containerName="hostpath"
	Oct 07 10:35:36 addons-681605 kubelet[1197]: I1007 10:35:36.072961    1197 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef8f4f3f-592d-44e2-aa3f-3b372f01185d" containerName="liveness-probe"
	Oct 07 10:35:36 addons-681605 kubelet[1197]: I1007 10:35:36.073014    1197 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef8f4f3f-592d-44e2-aa3f-3b372f01185d" containerName="csi-external-health-monitor-controller"
	Oct 07 10:35:36 addons-681605 kubelet[1197]: I1007 10:35:36.073045    1197 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef8f4f3f-592d-44e2-aa3f-3b372f01185d" containerName="csi-provisioner"
	Oct 07 10:35:36 addons-681605 kubelet[1197]: I1007 10:35:36.073079    1197 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dd2b0be-649e-4fca-9448-171e927c841c" containerName="volume-snapshot-controller"
	Oct 07 10:35:36 addons-681605 kubelet[1197]: I1007 10:35:36.073114    1197 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fedba4d-1b30-4b3a-904c-6e10b5381894" containerName="csi-resizer"
	Oct 07 10:35:36 addons-681605 kubelet[1197]: I1007 10:35:36.073144    1197 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a722b86-0d68-4a92-84a4-a1db2bff5162" containerName="csi-attacher"
	Oct 07 10:35:36 addons-681605 kubelet[1197]: I1007 10:35:36.073177    1197 memory_manager.go:354] "RemoveStaleState removing state" podUID="f77291cb-f852-485b-a01d-5ac154aa94e2" containerName="local-path-provisioner"
	Oct 07 10:35:36 addons-681605 kubelet[1197]: I1007 10:35:36.123305    1197 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjckd\" (UniqueName: \"kubernetes.io/projected/90c07936-8a56-435e-9ff4-58db904243cb-kube-api-access-rjckd\") pod \"hello-world-app-55bf9c44b4-2h846\" (UID: \"90c07936-8a56-435e-9ff4-58db904243cb\") " pod="default/hello-world-app-55bf9c44b4-2h846"
	
	
	==> storage-provisioner [1bfcc864fd24ecec1bd47cefbd4862a5c81eea3a774c391eb6f52e1500765100] <==
	I1007 10:23:12.084017       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 10:23:12.118001       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 10:23:12.118081       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 10:23:12.139600       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 10:23:12.139776       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-681605_bbb4f3d7-b106-4fd3-89c1-3d0edb6e4805!
	I1007 10:23:12.140810       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1aa4b118-f719-4134-b3ce-bdfd82029301", APIVersion:"v1", ResourceVersion:"595", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-681605_bbb4f3d7-b106-4fd3-89c1-3d0edb6e4805 became leader
	I1007 10:23:12.239902       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-681605_bbb4f3d7-b106-4fd3-89c1-3d0edb6e4805!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-681605 -n addons-681605
helpers_test.go:261: (dbg) Run:  kubectl --context addons-681605 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-2h846 ingress-nginx-admission-create-jwqck ingress-nginx-admission-patch-prnhv
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-681605 describe pod hello-world-app-55bf9c44b4-2h846 ingress-nginx-admission-create-jwqck ingress-nginx-admission-patch-prnhv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-681605 describe pod hello-world-app-55bf9c44b4-2h846 ingress-nginx-admission-create-jwqck ingress-nginx-admission-patch-prnhv: exit status 1 (70.851974ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-55bf9c44b4-2h846
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-681605/192.168.39.71
	Start Time:       Mon, 07 Oct 2024 10:35:36 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rjckd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rjckd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-2h846 to addons-681605
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jwqck" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-prnhv" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-681605 describe pod hello-world-app-55bf9c44b4-2h846 ingress-nginx-admission-create-jwqck ingress-nginx-admission-patch-prnhv: exit status 1
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-681605 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-amd64 -p addons-681605 addons disable ingress-dns --alsologtostderr -v=1: (1.456026179s)
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-681605 addons disable ingress --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-amd64 -p addons-681605 addons disable ingress --alsologtostderr -v=1: (7.748050518s)
--- FAIL: TestAddons/parallel/Ingress (156.21s)

                                                
                                    
TestAddons/parallel/MetricsServer (340.43s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 4.197805ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-z5fpj" [3b2974fc-b174-48a3-b7ed-5e1ae0743bb4] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.007006841s
addons_test.go:402: (dbg) Run:  kubectl --context addons-681605 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-681605 top pods -n kube-system: exit status 1 (101.874538ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9wqp6, age: 9m48.594962573s

                                                
                                                
** /stderr **
I1007 10:32:53.596610   11096 retry.go:31] will retry after 1.616270171s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-681605 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-681605 top pods -n kube-system: exit status 1 (65.860894ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9wqp6, age: 9m50.277442559s

                                                
                                                
** /stderr **
I1007 10:32:55.279328   11096 retry.go:31] will retry after 3.057682359s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-681605 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-681605 top pods -n kube-system: exit status 1 (71.105232ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9wqp6, age: 9m53.407043063s

                                                
                                                
** /stderr **
I1007 10:32:58.408770   11096 retry.go:31] will retry after 9.959747589s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-681605 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-681605 top pods -n kube-system: exit status 1 (64.891056ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9wqp6, age: 10m3.432195432s

                                                
                                                
** /stderr **
I1007 10:33:08.434012   11096 retry.go:31] will retry after 9.343463793s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-681605 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-681605 top pods -n kube-system: exit status 1 (77.98905ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9wqp6, age: 10m12.854697426s

                                                
                                                
** /stderr **
I1007 10:33:17.856609   11096 retry.go:31] will retry after 16.313017802s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-681605 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-681605 top pods -n kube-system: exit status 1 (67.65247ms)
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9wqp6, age: 10m29.235730645s
** /stderr **
I1007 10:33:34.237726   11096 retry.go:31] will retry after 18.817729804s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-681605 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-681605 top pods -n kube-system: exit status 1 (67.395125ms)
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9wqp6, age: 10m48.121732956s
** /stderr **
I1007 10:33:53.123449   11096 retry.go:31] will retry after 49.279056556s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-681605 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-681605 top pods -n kube-system: exit status 1 (65.361758ms)
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9wqp6, age: 11m37.470567196s
** /stderr **
I1007 10:34:42.472555   11096 retry.go:31] will retry after 26.177506923s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-681605 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-681605 top pods -n kube-system: exit status 1 (65.451477ms)
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9wqp6, age: 12m3.714280175s
** /stderr **
I1007 10:35:08.715968   11096 retry.go:31] will retry after 1m16.780202892s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-681605 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-681605 top pods -n kube-system: exit status 1 (64.282246ms)
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9wqp6, age: 13m20.565158436s
** /stderr **
I1007 10:36:25.567012   11096 retry.go:31] will retry after 39.3197272s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-681605 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-681605 top pods -n kube-system: exit status 1 (62.767492ms)
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9wqp6, age: 13m59.953114518s
** /stderr **
I1007 10:37:04.955023   11096 retry.go:31] will retry after 35.001834109s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-681605 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-681605 top pods -n kube-system: exit status 1 (61.656381ms)
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9wqp6, age: 14m35.017503042s
** /stderr **
I1007 10:37:40.019226   11096 retry.go:31] will retry after 45.197425042s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-681605 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-681605 top pods -n kube-system: exit status 1 (62.632048ms)
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9wqp6, age: 15m20.284280401s
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
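The repeated retry.go:31 lines above reflect the test's pattern of re-running kubectl top pods with an increasing delay until metrics-server starts answering or the overall budget runs out. A minimal, self-contained sketch of that pattern follows; it is not minikube's actual retry helper, and the function name waitForPodMetrics and the simple doubling delay are assumptions made only for illustration.

// retrysketch.go - hypothetical sketch of the retry-with-growing-delay pattern
// seen in the log above: run `kubectl top pods` until it succeeds or a deadline
// passes, roughly doubling the wait between attempts (the real helper uses
// jittered backoff).
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForPodMetrics(context, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 2 * time.Second
	for {
		out, err := exec.Command("kubectl", "--context", context, "top", "pods", "-n", namespace).CombinedOutput()
		if err == nil {
			return nil // metrics-server answered
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("metrics still unavailable after %s: %v\n%s", timeout, err, out)
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // grow the delay between attempts
	}
}

func main() {
	if err := waitForPodMetrics("addons-681605", "kube-system", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}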
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-681605 -n addons-681605
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-681605 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-681605 logs -n 25: (1.238080056s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-052891                                                                     | download-only-052891 | jenkins | v1.34.0 | 07 Oct 24 10:22 UTC | 07 Oct 24 10:22 UTC |
	| delete  | -p download-only-484375                                                                     | download-only-484375 | jenkins | v1.34.0 | 07 Oct 24 10:22 UTC | 07 Oct 24 10:22 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-079912 | jenkins | v1.34.0 | 07 Oct 24 10:22 UTC |                     |
	|         | binary-mirror-079912                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43695                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-079912                                                                     | binary-mirror-079912 | jenkins | v1.34.0 | 07 Oct 24 10:22 UTC | 07 Oct 24 10:22 UTC |
	| addons  | disable dashboard -p                                                                        | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:22 UTC |                     |
	|         | addons-681605                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:22 UTC |                     |
	|         | addons-681605                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-681605 --wait=true                                                                | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:22 UTC | 07 Oct 24 10:24 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-681605 addons disable                                                                | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:24 UTC | 07 Oct 24 10:24 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-681605 addons disable                                                                | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:32 UTC | 07 Oct 24 10:32 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:32 UTC | 07 Oct 24 10:32 UTC |
	|         | -p addons-681605                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-681605 addons disable                                                                | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:32 UTC | 07 Oct 24 10:32 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-681605 addons disable                                                                | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:33 UTC | 07 Oct 24 10:33 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-681605 ip                                                                            | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:33 UTC | 07 Oct 24 10:33 UTC |
	| addons  | addons-681605 addons disable                                                                | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:33 UTC | 07 Oct 24 10:33 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-681605 addons                                                                        | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:33 UTC | 07 Oct 24 10:33 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-681605 addons                                                                        | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:33 UTC | 07 Oct 24 10:33 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:33 UTC | 07 Oct 24 10:33 UTC |
	|         | -p addons-681605                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-681605 ssh cat                                                                       | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:33 UTC | 07 Oct 24 10:33 UTC |
	|         | /opt/local-path-provisioner/pvc-44bb06b3-65c8-40a0-8efe-d6acb8e8851b_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-681605 addons disable                                                                | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:33 UTC | 07 Oct 24 10:34 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-681605 ssh curl -s                                                                   | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:33 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-681605 addons                                                                        | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:34 UTC | 07 Oct 24 10:34 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-681605 addons                                                                        | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:34 UTC | 07 Oct 24 10:34 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-681605 ip                                                                            | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:35 UTC | 07 Oct 24 10:35 UTC |
	| addons  | addons-681605 addons disable                                                                | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:35 UTC | 07 Oct 24 10:35 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-681605 addons disable                                                                | addons-681605        | jenkins | v1.34.0 | 07 Oct 24 10:35 UTC | 07 Oct 24 10:35 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 10:22:20
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 10:22:20.006721   11818 out.go:345] Setting OutFile to fd 1 ...
	I1007 10:22:20.006838   11818 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:22:20.006847   11818 out.go:358] Setting ErrFile to fd 2...
	I1007 10:22:20.006851   11818 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:22:20.007049   11818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 10:22:20.007635   11818 out.go:352] Setting JSON to false
	I1007 10:22:20.008459   11818 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":234,"bootTime":1728296306,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 10:22:20.008564   11818 start.go:139] virtualization: kvm guest
	I1007 10:22:20.011046   11818 out.go:177] * [addons-681605] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 10:22:20.012623   11818 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 10:22:20.012645   11818 notify.go:220] Checking for updates...
	I1007 10:22:20.014995   11818 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 10:22:20.016096   11818 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:22:20.017313   11818 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:22:20.018441   11818 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 10:22:20.019630   11818 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 10:22:20.020888   11818 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 10:22:20.053491   11818 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 10:22:20.054760   11818 start.go:297] selected driver: kvm2
	I1007 10:22:20.054777   11818 start.go:901] validating driver "kvm2" against <nil>
	I1007 10:22:20.054789   11818 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 10:22:20.055478   11818 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:22:20.055566   11818 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19761-3912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 10:22:20.070619   11818 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 10:22:20.070666   11818 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 10:22:20.070904   11818 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 10:22:20.070935   11818 cni.go:84] Creating CNI manager for ""
	I1007 10:22:20.070975   11818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 10:22:20.070983   11818 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 10:22:20.071031   11818 start.go:340] cluster config:
	{Name:addons-681605 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-681605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHA
gentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:22:20.071115   11818 iso.go:125] acquiring lock: {Name:mk894f62b1e70fe8556c552f818beb1e4be6fad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:22:20.072814   11818 out.go:177] * Starting "addons-681605" primary control-plane node in "addons-681605" cluster
	I1007 10:22:20.074390   11818 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:22:20.074448   11818 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 10:22:20.074461   11818 cache.go:56] Caching tarball of preloaded images
	I1007 10:22:20.074567   11818 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 10:22:20.074584   11818 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 10:22:20.074907   11818 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/config.json ...
	I1007 10:22:20.074934   11818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/config.json: {Name:mk0a3fe40c14a0f70ab6963b6c11a89bec5f8a19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:22:20.075130   11818 start.go:360] acquireMachinesLock for addons-681605: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 10:22:20.075199   11818 start.go:364] duration metric: took 48.355µs to acquireMachinesLock for "addons-681605"
	I1007 10:22:20.075227   11818 start.go:93] Provisioning new machine with config: &{Name:addons-681605 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:addons-681605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:22:20.075296   11818 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 10:22:20.077005   11818 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1007 10:22:20.077180   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:22:20.077240   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:22:20.091805   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I1007 10:22:20.092326   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:22:20.092891   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:22:20.092913   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:22:20.093244   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:22:20.093427   11818 main.go:141] libmachine: (addons-681605) Calling .GetMachineName
	I1007 10:22:20.093589   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:22:20.093724   11818 start.go:159] libmachine.API.Create for "addons-681605" (driver="kvm2")
	I1007 10:22:20.093755   11818 client.go:168] LocalClient.Create starting
	I1007 10:22:20.093789   11818 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem
	I1007 10:22:20.210324   11818 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem
	I1007 10:22:20.286119   11818 main.go:141] libmachine: Running pre-create checks...
	I1007 10:22:20.286143   11818 main.go:141] libmachine: (addons-681605) Calling .PreCreateCheck
	I1007 10:22:20.286613   11818 main.go:141] libmachine: (addons-681605) Calling .GetConfigRaw
	I1007 10:22:20.287099   11818 main.go:141] libmachine: Creating machine...
	I1007 10:22:20.287112   11818 main.go:141] libmachine: (addons-681605) Calling .Create
	I1007 10:22:20.287294   11818 main.go:141] libmachine: (addons-681605) Creating KVM machine...
	I1007 10:22:20.288556   11818 main.go:141] libmachine: (addons-681605) DBG | found existing default KVM network
	I1007 10:22:20.289274   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:20.289137   11840 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002091f0}
	I1007 10:22:20.289308   11818 main.go:141] libmachine: (addons-681605) DBG | created network xml: 
	I1007 10:22:20.289327   11818 main.go:141] libmachine: (addons-681605) DBG | <network>
	I1007 10:22:20.289334   11818 main.go:141] libmachine: (addons-681605) DBG |   <name>mk-addons-681605</name>
	I1007 10:22:20.289339   11818 main.go:141] libmachine: (addons-681605) DBG |   <dns enable='no'/>
	I1007 10:22:20.289376   11818 main.go:141] libmachine: (addons-681605) DBG |   
	I1007 10:22:20.289406   11818 main.go:141] libmachine: (addons-681605) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1007 10:22:20.289430   11818 main.go:141] libmachine: (addons-681605) DBG |     <dhcp>
	I1007 10:22:20.289439   11818 main.go:141] libmachine: (addons-681605) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1007 10:22:20.289449   11818 main.go:141] libmachine: (addons-681605) DBG |     </dhcp>
	I1007 10:22:20.289456   11818 main.go:141] libmachine: (addons-681605) DBG |   </ip>
	I1007 10:22:20.289464   11818 main.go:141] libmachine: (addons-681605) DBG |   
	I1007 10:22:20.289470   11818 main.go:141] libmachine: (addons-681605) DBG | </network>
	I1007 10:22:20.289521   11818 main.go:141] libmachine: (addons-681605) DBG | 
	I1007 10:22:20.295027   11818 main.go:141] libmachine: (addons-681605) DBG | trying to create private KVM network mk-addons-681605 192.168.39.0/24...
	I1007 10:22:20.363665   11818 main.go:141] libmachine: (addons-681605) DBG | private KVM network mk-addons-681605 192.168.39.0/24 created
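The network XML printed above is what gets handed to libvirt just before the private KVM network mk-addons-681605 is created. Purely as a rough sketch, and assuming the libvirt.org/go/libvirt Go bindings (NewConnect, NetworkDefineXML, Network.Create), defining and starting such a network could look like the following; minikube's own code path differs.

// netdefine.go - illustrative only: define and start a private libvirt network
// from the same kind of XML shown in the log above.
package main

import (
	"fmt"
	"log"

	libvirt "libvirt.org/go/libvirt" // assumed binding; not minikube's code
)

const networkXML = `<network>
  <name>mk-addons-681605</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connecting to libvirt: %v", err)
	}
	defer conn.Close()

	// Define the persistent network from XML, then start it.
	network, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatalf("defining network: %v", err)
	}
	defer network.Free()
	if err := network.Create(); err != nil {
		log.Fatalf("starting network: %v", err)
	}
	fmt.Println("private network mk-addons-681605 defined and started")
}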
	I1007 10:22:20.363729   11818 main.go:141] libmachine: (addons-681605) Setting up store path in /home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605 ...
	I1007 10:22:20.363756   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:20.363665   11840 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:22:20.363775   11818 main.go:141] libmachine: (addons-681605) Building disk image from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 10:22:20.363820   11818 main.go:141] libmachine: (addons-681605) Downloading /home/jenkins/minikube-integration/19761-3912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 10:22:20.622626   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:20.622453   11840 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa...
	I1007 10:22:20.764745   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:20.764586   11840 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/addons-681605.rawdisk...
	I1007 10:22:20.764774   11818 main.go:141] libmachine: (addons-681605) DBG | Writing magic tar header
	I1007 10:22:20.764788   11818 main.go:141] libmachine: (addons-681605) DBG | Writing SSH key tar header
	I1007 10:22:20.764800   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:20.764705   11840 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605 ...
	I1007 10:22:20.764812   11818 main.go:141] libmachine: (addons-681605) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605
	I1007 10:22:20.764884   11818 main.go:141] libmachine: (addons-681605) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605 (perms=drwx------)
	I1007 10:22:20.764913   11818 main.go:141] libmachine: (addons-681605) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines (perms=drwxr-xr-x)
	I1007 10:22:20.764926   11818 main.go:141] libmachine: (addons-681605) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines
	I1007 10:22:20.764940   11818 main.go:141] libmachine: (addons-681605) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:22:20.764949   11818 main.go:141] libmachine: (addons-681605) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912
	I1007 10:22:20.764958   11818 main.go:141] libmachine: (addons-681605) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 10:22:20.764966   11818 main.go:141] libmachine: (addons-681605) DBG | Checking permissions on dir: /home/jenkins
	I1007 10:22:20.764975   11818 main.go:141] libmachine: (addons-681605) DBG | Checking permissions on dir: /home
	I1007 10:22:20.764985   11818 main.go:141] libmachine: (addons-681605) DBG | Skipping /home - not owner
	I1007 10:22:20.765042   11818 main.go:141] libmachine: (addons-681605) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube (perms=drwxr-xr-x)
	I1007 10:22:20.765071   11818 main.go:141] libmachine: (addons-681605) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912 (perms=drwxrwxr-x)
	I1007 10:22:20.765081   11818 main.go:141] libmachine: (addons-681605) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 10:22:20.765088   11818 main.go:141] libmachine: (addons-681605) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 10:22:20.765098   11818 main.go:141] libmachine: (addons-681605) Creating domain...
	I1007 10:22:20.766018   11818 main.go:141] libmachine: (addons-681605) define libvirt domain using xml: 
	I1007 10:22:20.766050   11818 main.go:141] libmachine: (addons-681605) <domain type='kvm'>
	I1007 10:22:20.766059   11818 main.go:141] libmachine: (addons-681605)   <name>addons-681605</name>
	I1007 10:22:20.766071   11818 main.go:141] libmachine: (addons-681605)   <memory unit='MiB'>4000</memory>
	I1007 10:22:20.766082   11818 main.go:141] libmachine: (addons-681605)   <vcpu>2</vcpu>
	I1007 10:22:20.766089   11818 main.go:141] libmachine: (addons-681605)   <features>
	I1007 10:22:20.766097   11818 main.go:141] libmachine: (addons-681605)     <acpi/>
	I1007 10:22:20.766107   11818 main.go:141] libmachine: (addons-681605)     <apic/>
	I1007 10:22:20.766118   11818 main.go:141] libmachine: (addons-681605)     <pae/>
	I1007 10:22:20.766127   11818 main.go:141] libmachine: (addons-681605)     
	I1007 10:22:20.766137   11818 main.go:141] libmachine: (addons-681605)   </features>
	I1007 10:22:20.766152   11818 main.go:141] libmachine: (addons-681605)   <cpu mode='host-passthrough'>
	I1007 10:22:20.766163   11818 main.go:141] libmachine: (addons-681605)   
	I1007 10:22:20.766179   11818 main.go:141] libmachine: (addons-681605)   </cpu>
	I1007 10:22:20.766190   11818 main.go:141] libmachine: (addons-681605)   <os>
	I1007 10:22:20.766201   11818 main.go:141] libmachine: (addons-681605)     <type>hvm</type>
	I1007 10:22:20.766213   11818 main.go:141] libmachine: (addons-681605)     <boot dev='cdrom'/>
	I1007 10:22:20.766227   11818 main.go:141] libmachine: (addons-681605)     <boot dev='hd'/>
	I1007 10:22:20.766239   11818 main.go:141] libmachine: (addons-681605)     <bootmenu enable='no'/>
	I1007 10:22:20.766247   11818 main.go:141] libmachine: (addons-681605)   </os>
	I1007 10:22:20.766255   11818 main.go:141] libmachine: (addons-681605)   <devices>
	I1007 10:22:20.766262   11818 main.go:141] libmachine: (addons-681605)     <disk type='file' device='cdrom'>
	I1007 10:22:20.766289   11818 main.go:141] libmachine: (addons-681605)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/boot2docker.iso'/>
	I1007 10:22:20.766305   11818 main.go:141] libmachine: (addons-681605)       <target dev='hdc' bus='scsi'/>
	I1007 10:22:20.766314   11818 main.go:141] libmachine: (addons-681605)       <readonly/>
	I1007 10:22:20.766324   11818 main.go:141] libmachine: (addons-681605)     </disk>
	I1007 10:22:20.766335   11818 main.go:141] libmachine: (addons-681605)     <disk type='file' device='disk'>
	I1007 10:22:20.766348   11818 main.go:141] libmachine: (addons-681605)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 10:22:20.766363   11818 main.go:141] libmachine: (addons-681605)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/addons-681605.rawdisk'/>
	I1007 10:22:20.766377   11818 main.go:141] libmachine: (addons-681605)       <target dev='hda' bus='virtio'/>
	I1007 10:22:20.766388   11818 main.go:141] libmachine: (addons-681605)     </disk>
	I1007 10:22:20.766397   11818 main.go:141] libmachine: (addons-681605)     <interface type='network'>
	I1007 10:22:20.766410   11818 main.go:141] libmachine: (addons-681605)       <source network='mk-addons-681605'/>
	I1007 10:22:20.766420   11818 main.go:141] libmachine: (addons-681605)       <model type='virtio'/>
	I1007 10:22:20.766431   11818 main.go:141] libmachine: (addons-681605)     </interface>
	I1007 10:22:20.766444   11818 main.go:141] libmachine: (addons-681605)     <interface type='network'>
	I1007 10:22:20.766472   11818 main.go:141] libmachine: (addons-681605)       <source network='default'/>
	I1007 10:22:20.766491   11818 main.go:141] libmachine: (addons-681605)       <model type='virtio'/>
	I1007 10:22:20.766497   11818 main.go:141] libmachine: (addons-681605)     </interface>
	I1007 10:22:20.766514   11818 main.go:141] libmachine: (addons-681605)     <serial type='pty'>
	I1007 10:22:20.766522   11818 main.go:141] libmachine: (addons-681605)       <target port='0'/>
	I1007 10:22:20.766527   11818 main.go:141] libmachine: (addons-681605)     </serial>
	I1007 10:22:20.766534   11818 main.go:141] libmachine: (addons-681605)     <console type='pty'>
	I1007 10:22:20.766543   11818 main.go:141] libmachine: (addons-681605)       <target type='serial' port='0'/>
	I1007 10:22:20.766570   11818 main.go:141] libmachine: (addons-681605)     </console>
	I1007 10:22:20.766598   11818 main.go:141] libmachine: (addons-681605)     <rng model='virtio'>
	I1007 10:22:20.766613   11818 main.go:141] libmachine: (addons-681605)       <backend model='random'>/dev/random</backend>
	I1007 10:22:20.766620   11818 main.go:141] libmachine: (addons-681605)     </rng>
	I1007 10:22:20.766631   11818 main.go:141] libmachine: (addons-681605)     
	I1007 10:22:20.766639   11818 main.go:141] libmachine: (addons-681605)     
	I1007 10:22:20.766647   11818 main.go:141] libmachine: (addons-681605)   </devices>
	I1007 10:22:20.766655   11818 main.go:141] libmachine: (addons-681605) </domain>
	I1007 10:22:20.766663   11818 main.go:141] libmachine: (addons-681605) 
	I1007 10:22:20.772053   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a4:6e:99 in network default
	I1007 10:22:20.772584   11818 main.go:141] libmachine: (addons-681605) Ensuring networks are active...
	I1007 10:22:20.772605   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:20.773248   11818 main.go:141] libmachine: (addons-681605) Ensuring network default is active
	I1007 10:22:20.773530   11818 main.go:141] libmachine: (addons-681605) Ensuring network mk-addons-681605 is active
	I1007 10:22:20.773993   11818 main.go:141] libmachine: (addons-681605) Getting domain xml...
	I1007 10:22:20.774760   11818 main.go:141] libmachine: (addons-681605) Creating domain...
	I1007 10:22:22.161804   11818 main.go:141] libmachine: (addons-681605) Waiting to get IP...
	I1007 10:22:22.162554   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:22.162953   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:22.162994   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:22.162948   11840 retry.go:31] will retry after 302.185888ms: waiting for machine to come up
	I1007 10:22:22.466345   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:22.466811   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:22.466833   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:22.466769   11840 retry.go:31] will retry after 257.765553ms: waiting for machine to come up
	I1007 10:22:22.726158   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:22.726616   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:22.726647   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:22.726566   11840 retry.go:31] will retry after 409.131874ms: waiting for machine to come up
	I1007 10:22:23.137044   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:23.137411   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:23.137440   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:23.137398   11840 retry.go:31] will retry after 377.38954ms: waiting for machine to come up
	I1007 10:22:23.515929   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:23.516346   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:23.516381   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:23.516321   11840 retry.go:31] will retry after 503.053943ms: waiting for machine to come up
	I1007 10:22:24.020917   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:24.021331   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:24.021366   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:24.021289   11840 retry.go:31] will retry after 585.883351ms: waiting for machine to come up
	I1007 10:22:24.609003   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:24.609485   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:24.609509   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:24.609415   11840 retry.go:31] will retry after 975.976889ms: waiting for machine to come up
	I1007 10:22:25.587029   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:25.587445   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:25.587485   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:25.587380   11840 retry.go:31] will retry after 1.250631484s: waiting for machine to come up
	I1007 10:22:26.839409   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:26.839855   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:26.839884   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:26.839812   11840 retry.go:31] will retry after 1.518594311s: waiting for machine to come up
	I1007 10:22:28.360337   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:28.360732   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:28.360756   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:28.360671   11840 retry.go:31] will retry after 1.758664231s: waiting for machine to come up
	I1007 10:22:30.121081   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:30.121532   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:30.121562   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:30.121481   11840 retry.go:31] will retry after 1.798470244s: waiting for machine to come up
	I1007 10:22:31.922286   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:31.922746   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:31.922775   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:31.922711   11840 retry.go:31] will retry after 2.965673146s: waiting for machine to come up
	I1007 10:22:34.889581   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:34.889974   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:34.890009   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:34.889924   11840 retry.go:31] will retry after 3.598608124s: waiting for machine to come up
	I1007 10:22:38.490108   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:38.490436   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find current IP address of domain addons-681605 in network mk-addons-681605
	I1007 10:22:38.490457   11818 main.go:141] libmachine: (addons-681605) DBG | I1007 10:22:38.490398   11840 retry.go:31] will retry after 4.481598971s: waiting for machine to come up
	I1007 10:22:42.975128   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:42.975616   11818 main.go:141] libmachine: (addons-681605) Found IP for machine: 192.168.39.71
	I1007 10:22:42.975639   11818 main.go:141] libmachine: (addons-681605) Reserving static IP address...
	I1007 10:22:42.975675   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has current primary IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:42.975958   11818 main.go:141] libmachine: (addons-681605) DBG | unable to find host DHCP lease matching {name: "addons-681605", mac: "52:54:00:a3:aa:32", ip: "192.168.39.71"} in network mk-addons-681605
	I1007 10:22:43.049700   11818 main.go:141] libmachine: (addons-681605) DBG | Getting to WaitForSSH function...
	I1007 10:22:43.049728   11818 main.go:141] libmachine: (addons-681605) Reserved static IP address: 192.168.39.71
	I1007 10:22:43.049740   11818 main.go:141] libmachine: (addons-681605) Waiting for SSH to be available...
	I1007 10:22:43.052685   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.053145   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:43.053192   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.053297   11818 main.go:141] libmachine: (addons-681605) DBG | Using SSH client type: external
	I1007 10:22:43.053332   11818 main.go:141] libmachine: (addons-681605) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa (-rw-------)
	I1007 10:22:43.053370   11818 main.go:141] libmachine: (addons-681605) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 10:22:43.053388   11818 main.go:141] libmachine: (addons-681605) DBG | About to run SSH command:
	I1007 10:22:43.053399   11818 main.go:141] libmachine: (addons-681605) DBG | exit 0
	I1007 10:22:43.184086   11818 main.go:141] libmachine: (addons-681605) DBG | SSH cmd err, output: <nil>: 
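The WaitForSSH step above keeps probing the new VM until the `exit 0` command succeeds over SSH. A simplified, hypothetical sketch of the same idea follows; it only checks that TCP port 22 accepts connections rather than running a command, and waitForSSH with its fixed poll interval is an illustrative choice, not minikube's implementation.

// sshwait.go - hypothetical sketch: poll the VM's SSH port until it is
// reachable or an overall deadline passes.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			return nil // sshd is accepting connections
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("SSH on %s not reachable after %s: %v", addr, timeout, err)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	if err := waitForSSH("192.168.39.71:22", 3*time.Minute); err != nil {
		fmt.Println(err)
	}
}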
	I1007 10:22:43.184381   11818 main.go:141] libmachine: (addons-681605) KVM machine creation complete!
	I1007 10:22:43.184746   11818 main.go:141] libmachine: (addons-681605) Calling .GetConfigRaw
	I1007 10:22:43.185320   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:22:43.185500   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:22:43.185632   11818 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 10:22:43.185647   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:22:43.186766   11818 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 10:22:43.186781   11818 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 10:22:43.186786   11818 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 10:22:43.186791   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:22:43.188950   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.189290   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:43.189318   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.189422   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:22:43.189608   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:43.189739   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:43.189900   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:22:43.190041   11818 main.go:141] libmachine: Using SSH client type: native
	I1007 10:22:43.190236   11818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1007 10:22:43.190251   11818 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 10:22:43.291777   11818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:22:43.291801   11818 main.go:141] libmachine: Detecting the provisioner...
	I1007 10:22:43.291812   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:22:43.294213   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.294537   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:43.294562   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.294772   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:22:43.294949   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:43.295175   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:43.295301   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:22:43.295478   11818 main.go:141] libmachine: Using SSH client type: native
	I1007 10:22:43.295718   11818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1007 10:22:43.295733   11818 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 10:22:43.396977   11818 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 10:22:43.397053   11818 main.go:141] libmachine: found compatible host: buildroot
	I1007 10:22:43.397073   11818 main.go:141] libmachine: Provisioning with buildroot...
	I1007 10:22:43.397090   11818 main.go:141] libmachine: (addons-681605) Calling .GetMachineName
	I1007 10:22:43.397361   11818 buildroot.go:166] provisioning hostname "addons-681605"
	I1007 10:22:43.397384   11818 main.go:141] libmachine: (addons-681605) Calling .GetMachineName
	I1007 10:22:43.397588   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:22:43.400281   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.400645   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:43.400671   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.400867   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:22:43.401066   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:43.401271   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:43.401411   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:22:43.401588   11818 main.go:141] libmachine: Using SSH client type: native
	I1007 10:22:43.401758   11818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1007 10:22:43.401771   11818 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-681605 && echo "addons-681605" | sudo tee /etc/hostname
	I1007 10:22:43.521706   11818 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-681605
	
	I1007 10:22:43.521739   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:22:43.524322   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.524627   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:43.524654   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.524789   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:22:43.524995   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:43.525178   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:43.525325   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:22:43.525481   11818 main.go:141] libmachine: Using SSH client type: native
	I1007 10:22:43.525650   11818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1007 10:22:43.525669   11818 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-681605' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-681605/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-681605' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 10:22:43.637022   11818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:22:43.637049   11818 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 10:22:43.637098   11818 buildroot.go:174] setting up certificates
	I1007 10:22:43.637111   11818 provision.go:84] configureAuth start
	I1007 10:22:43.637127   11818 main.go:141] libmachine: (addons-681605) Calling .GetMachineName
	I1007 10:22:43.637381   11818 main.go:141] libmachine: (addons-681605) Calling .GetIP
	I1007 10:22:43.639967   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.640306   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:43.640332   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.640432   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:22:43.642670   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.643036   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:43.643069   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.643237   11818 provision.go:143] copyHostCerts
	I1007 10:22:43.643311   11818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 10:22:43.643472   11818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 10:22:43.643563   11818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 10:22:43.643638   11818 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.addons-681605 san=[127.0.0.1 192.168.39.71 addons-681605 localhost minikube]
	I1007 10:22:43.750599   11818 provision.go:177] copyRemoteCerts
	I1007 10:22:43.750651   11818 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 10:22:43.750673   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:22:43.753388   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.753808   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:43.753836   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.754050   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:22:43.754243   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:43.754393   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:22:43.754507   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:22:43.834400   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 10:22:43.859950   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 10:22:43.885070   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 10:22:43.910338   11818 provision.go:87] duration metric: took 273.202528ms to configureAuth
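configureAuth above generates a server certificate signed by the local CA, with the SANs listed in the log (127.0.0.1, 192.168.39.71, addons-681605, localhost, minikube), and copies the PEM files to /etc/docker. A rough crypto/x509 sketch of that signing step follows, assuming the CA pair is on disk as ca.pem / ca-key.pem with an RSA PKCS#1 key; file names, serial number and validity are illustrative, this is not minikube's code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA pair (assumes an RSA PKCS#1 private key).
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Fresh server key plus a template carrying the SANs from the log.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-681605"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-681605", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.71")},
	}

	// Sign with the CA and write server.pem / server-key.pem.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{
		Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
}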
	I1007 10:22:43.910370   11818 buildroot.go:189] setting minikube options for container-runtime
	I1007 10:22:43.910568   11818 config.go:182] Loaded profile config "addons-681605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:22:43.910650   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:22:43.913827   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.914108   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:43.914135   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:43.914369   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:22:43.914539   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:43.914730   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:43.914830   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:22:43.914939   11818 main.go:141] libmachine: Using SSH client type: native
	I1007 10:22:43.915116   11818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1007 10:22:43.915136   11818 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 10:22:44.135204   11818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 10:22:44.135231   11818 main.go:141] libmachine: Checking connection to Docker...
	I1007 10:22:44.135241   11818 main.go:141] libmachine: (addons-681605) Calling .GetURL
	I1007 10:22:44.136402   11818 main.go:141] libmachine: (addons-681605) DBG | Using libvirt version 6000000
	I1007 10:22:44.138224   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.138526   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:44.138552   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.138724   11818 main.go:141] libmachine: Docker is up and running!
	I1007 10:22:44.138738   11818 main.go:141] libmachine: Reticulating splines...
	I1007 10:22:44.138746   11818 client.go:171] duration metric: took 24.044984593s to LocalClient.Create
	I1007 10:22:44.138771   11818 start.go:167] duration metric: took 24.045045516s to libmachine.API.Create "addons-681605"
	I1007 10:22:44.138792   11818 start.go:293] postStartSetup for "addons-681605" (driver="kvm2")
	I1007 10:22:44.138808   11818 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 10:22:44.138831   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:22:44.139042   11818 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 10:22:44.139065   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:22:44.141175   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.141471   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:44.141493   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.141610   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:22:44.141779   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:44.141924   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:22:44.142041   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:22:44.224277   11818 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 10:22:44.228883   11818 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 10:22:44.228913   11818 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 10:22:44.228995   11818 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 10:22:44.229023   11818 start.go:296] duration metric: took 90.223432ms for postStartSetup
	I1007 10:22:44.229054   11818 main.go:141] libmachine: (addons-681605) Calling .GetConfigRaw
	I1007 10:22:44.229607   11818 main.go:141] libmachine: (addons-681605) Calling .GetIP
	I1007 10:22:44.232055   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.232454   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:44.232483   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.232687   11818 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/config.json ...
	I1007 10:22:44.232868   11818 start.go:128] duration metric: took 24.157562052s to createHost
	I1007 10:22:44.232893   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:22:44.234840   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.235159   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:44.235183   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.235319   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:22:44.235458   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:44.235571   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:44.235708   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:22:44.235867   11818 main.go:141] libmachine: Using SSH client type: native
	I1007 10:22:44.236060   11818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I1007 10:22:44.236072   11818 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 10:22:44.336691   11818 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728296564.311086132
	
	I1007 10:22:44.336715   11818 fix.go:216] guest clock: 1728296564.311086132
	I1007 10:22:44.336722   11818 fix.go:229] Guest: 2024-10-07 10:22:44.311086132 +0000 UTC Remote: 2024-10-07 10:22:44.232882006 +0000 UTC m=+24.261967860 (delta=78.204126ms)
	I1007 10:22:44.336760   11818 fix.go:200] guest clock delta is within tolerance: 78.204126ms
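The guest clock check above runs `date +%s.%N` over SSH and accepts the drift between guest and host if it stays within tolerance. A small sketch of that comparison, assuming the raw "seconds.nanoseconds" string has already been captured; the 2-second tolerance is illustrative, not minikube's actual value.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses `date +%s.%N` output and returns host-minus-guest drift.
// Assumes the fractional part, when present, is a 9-digit nanosecond field.
func clockDelta(guest string) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	nsec := int64(0)
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Since(time.Unix(sec, nsec)), nil
}

func main() {
	// Value taken from the log line above.
	delta, err := clockDelta("1728296564.311086132")
	if err != nil {
		fmt.Println(err)
		return
	}
	if delta > -2*time.Second && delta < 2*time.Second {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is too large\n", delta)
	}
}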
	I1007 10:22:44.336768   11818 start.go:83] releasing machines lock for "addons-681605", held for 24.261553295s
	I1007 10:22:44.336791   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:22:44.337047   11818 main.go:141] libmachine: (addons-681605) Calling .GetIP
	I1007 10:22:44.339485   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.339938   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:44.339963   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.340129   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:22:44.340672   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:22:44.340819   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:22:44.340920   11818 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 10:22:44.340973   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:22:44.341028   11818 ssh_runner.go:195] Run: cat /version.json
	I1007 10:22:44.341049   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:22:44.343366   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.343592   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:44.343627   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.343728   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.343760   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:22:44.343950   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:44.344119   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:44.344128   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:22:44.344146   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:44.344313   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:22:44.344318   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:22:44.344437   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:22:44.344569   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:22:44.344700   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:22:44.417367   11818 ssh_runner.go:195] Run: systemctl --version
	I1007 10:22:44.443721   11818 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 10:22:44.606811   11818 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 10:22:44.613290   11818 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 10:22:44.613349   11818 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 10:22:44.629602   11818 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
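The find/mv one-liner above renames any existing bridge or podman CNI configs with a .mk_disabled suffix so they do not conflict with the CNI minikube is about to configure. A rough Go equivalent of that rename pass, assuming the /etc/cni/net.d layout seen in the log; illustrative only.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		// Same patterns as the find command: *bridge* or *podman*,
		// skipping files that are already disabled.
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Println("disabled", src)
	}
}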
	I1007 10:22:44.629633   11818 start.go:495] detecting cgroup driver to use...
	I1007 10:22:44.629695   11818 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 10:22:44.646010   11818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 10:22:44.661078   11818 docker.go:217] disabling cri-docker service (if available) ...
	I1007 10:22:44.661140   11818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 10:22:44.675927   11818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 10:22:44.690323   11818 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 10:22:44.802885   11818 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 10:22:44.951049   11818 docker.go:233] disabling docker service ...
	I1007 10:22:44.951110   11818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 10:22:44.966695   11818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 10:22:44.980644   11818 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 10:22:45.114859   11818 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 10:22:45.237145   11818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 10:22:45.251806   11818 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 10:22:45.271887   11818 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 10:22:45.271957   11818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:22:45.283594   11818 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 10:22:45.283669   11818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:22:45.294919   11818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:22:45.306479   11818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:22:45.318053   11818 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 10:22:45.329238   11818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:22:45.340723   11818 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:22:45.358754   11818 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
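The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that the pause image and cgroup driver match what the kubelet will be configured with. A small in-place edit sketch in Go covering the first two substitutions (the regular expressions mirror the sed expressions in the log; this is an illustration, not minikube's implementation).

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Println(err)
		return
	}
	// Force the pause image and the cgroup manager, whatever the file says now,
	// matching the intent of the sed expressions above.
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	out = cgroup.ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, out, 0644); err != nil {
		fmt.Println(err)
	}
}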
	I1007 10:22:45.369559   11818 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 10:22:45.381007   11818 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 10:22:45.381085   11818 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 10:22:45.395053   11818 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 10:22:45.405374   11818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:22:45.515409   11818 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 10:22:45.607675   11818 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 10:22:45.607770   11818 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 10:22:45.612759   11818 start.go:563] Will wait 60s for crictl version
	I1007 10:22:45.612835   11818 ssh_runner.go:195] Run: which crictl
	I1007 10:22:45.616514   11818 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 10:22:45.655593   11818 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 10:22:45.655737   11818 ssh_runner.go:195] Run: crio --version
	I1007 10:22:45.685092   11818 ssh_runner.go:195] Run: crio --version
	I1007 10:22:45.716243   11818 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 10:22:45.717528   11818 main.go:141] libmachine: (addons-681605) Calling .GetIP
	I1007 10:22:45.720057   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:45.720336   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:22:45.720359   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:22:45.720579   11818 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 10:22:45.724783   11818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:22:45.737582   11818 kubeadm.go:883] updating cluster {Name:addons-681605 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-681605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 10:22:45.737732   11818 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:22:45.737793   11818 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:22:45.770595   11818 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 10:22:45.770674   11818 ssh_runner.go:195] Run: which lz4
	I1007 10:22:45.774750   11818 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 10:22:45.778965   11818 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 10:22:45.779003   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 10:22:47.101349   11818 crio.go:462] duration metric: took 1.326625678s to copy over tarball
	I1007 10:22:47.101414   11818 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 10:22:49.233715   11818 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.132271512s)
	I1007 10:22:49.233743   11818 crio.go:469] duration metric: took 2.132367893s to extract the tarball
	I1007 10:22:49.233752   11818 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 10:22:49.272079   11818 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:22:49.314282   11818 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 10:22:49.314305   11818 cache_images.go:84] Images are preloaded, skipping loading
	I1007 10:22:49.314312   11818 kubeadm.go:934] updating node { 192.168.39.71 8443 v1.31.1 crio true true} ...
	I1007 10:22:49.314403   11818 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-681605 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-681605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 10:22:49.314471   11818 ssh_runner.go:195] Run: crio config
	I1007 10:22:49.359239   11818 cni.go:84] Creating CNI manager for ""
	I1007 10:22:49.359269   11818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 10:22:49.359280   11818 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 10:22:49.359301   11818 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.71 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-681605 NodeName:addons-681605 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.71 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 10:22:49.359452   11818 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-681605"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.71
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.71"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 10:22:49.359516   11818 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 10:22:49.369815   11818 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 10:22:49.369880   11818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 10:22:49.379933   11818 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1007 10:22:49.397084   11818 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 10:22:49.414465   11818 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I1007 10:22:49.432657   11818 ssh_runner.go:195] Run: grep 192.168.39.71	control-plane.minikube.internal$ /etc/hosts
	I1007 10:22:49.436730   11818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.71	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
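The bash pipeline above rewrites /etc/hosts so that control-plane.minikube.internal resolves to 192.168.39.71, dropping any stale entry first. A plain Go sketch of that idempotent rewrite, assuming the same tab-separated hosts format; the helper name is hypothetical.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHost rewrites the hosts file so exactly one line maps hostname to ip,
// mirroring the grep/echo pipeline in the log (illustrative sketch only).
func ensureHost(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop any stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHost("/etc/hosts", "192.168.39.71", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}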
	I1007 10:22:49.449941   11818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:22:49.581215   11818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:22:49.600064   11818 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605 for IP: 192.168.39.71
	I1007 10:22:49.600085   11818 certs.go:194] generating shared ca certs ...
	I1007 10:22:49.600101   11818 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:22:49.600256   11818 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 10:22:49.685062   11818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt ...
	I1007 10:22:49.685088   11818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt: {Name:mk1bebb0d608c2502f725269f89a728785649358 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:22:49.685273   11818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key ...
	I1007 10:22:49.685287   11818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key: {Name:mk0484bf94e36afd146e1707e22e8856544b1d70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:22:49.685387   11818 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 10:22:49.937366   11818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt ...
	I1007 10:22:49.937416   11818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt: {Name:mkf3ac5044e36edbadc1cf9a4d070f939dedff0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:22:49.937594   11818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key ...
	I1007 10:22:49.937607   11818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key: {Name:mk6c83283b65147b1395a3e37054954c48d7f3ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:22:49.937698   11818 certs.go:256] generating profile certs ...
	I1007 10:22:49.937751   11818 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.key
	I1007 10:22:49.937765   11818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt with IP's: []
	I1007 10:22:50.097103   11818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt ...
	I1007 10:22:50.097132   11818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: {Name:mkaa706578292541c6064467734dda876cf7cce9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:22:50.097290   11818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.key ...
	I1007 10:22:50.097300   11818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.key: {Name:mkf1fd7ffeb3ae97b2b345e6f9af0a37e79b50e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:22:50.097366   11818 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/apiserver.key.e594ed4a
	I1007 10:22:50.097382   11818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/apiserver.crt.e594ed4a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.71]
	I1007 10:22:50.161850   11818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/apiserver.crt.e594ed4a ...
	I1007 10:22:50.161876   11818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/apiserver.crt.e594ed4a: {Name:mk71e3521cc2b54e782a3ffce378308ee1bc4559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:22:50.162064   11818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/apiserver.key.e594ed4a ...
	I1007 10:22:50.162078   11818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/apiserver.key.e594ed4a: {Name:mk97fd44afa94c167f3de4d0934f7fdfaeb7ebe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:22:50.162167   11818 certs.go:381] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/apiserver.crt.e594ed4a -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/apiserver.crt
	I1007 10:22:50.162260   11818 certs.go:385] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/apiserver.key.e594ed4a -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/apiserver.key
	I1007 10:22:50.162309   11818 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/proxy-client.key
	I1007 10:22:50.162326   11818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/proxy-client.crt with IP's: []
	I1007 10:22:50.260625   11818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/proxy-client.crt ...
	I1007 10:22:50.260655   11818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/proxy-client.crt: {Name:mkb3a910d79c1560c2afe1e9f4d499332cc60ecf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:22:50.260828   11818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/proxy-client.key ...
	I1007 10:22:50.260841   11818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/proxy-client.key: {Name:mk69bfb1e2fe73bdc6a9a3af51018d17128bc8b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:22:50.261044   11818 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 10:22:50.261083   11818 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 10:22:50.261106   11818 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 10:22:50.261132   11818 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 10:22:50.261751   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 10:22:50.290469   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 10:22:50.314694   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 10:22:50.349437   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 10:22:50.375136   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1007 10:22:50.400642   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 10:22:50.425015   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 10:22:50.449922   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 10:22:50.474970   11818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 10:22:50.500011   11818 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 10:22:50.517368   11818 ssh_runner.go:195] Run: openssl version
	I1007 10:22:50.523142   11818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 10:22:50.534384   11818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:22:50.539014   11818 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:22:50.539071   11818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:22:50.544846   11818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 10:22:50.555696   11818 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 10:22:50.559827   11818 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 10:22:50.559876   11818 kubeadm.go:392] StartCluster: {Name:addons-681605 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-681605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:22:50.559954   11818 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 10:22:50.560034   11818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 10:22:50.596468   11818 cri.go:89] found id: ""
	I1007 10:22:50.596537   11818 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 10:22:50.606334   11818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 10:22:50.619572   11818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 10:22:50.630860   11818 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 10:22:50.630885   11818 kubeadm.go:157] found existing configuration files:
	
	I1007 10:22:50.630937   11818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 10:22:50.640489   11818 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 10:22:50.640583   11818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 10:22:50.651440   11818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 10:22:50.660909   11818 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 10:22:50.660973   11818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 10:22:50.671386   11818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 10:22:50.680417   11818 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 10:22:50.680474   11818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 10:22:50.689585   11818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 10:22:50.698694   11818 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 10:22:50.698751   11818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 10:22:50.708077   11818 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 10:22:50.756961   11818 kubeadm.go:310] W1007 10:22:50.738794     814 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 10:22:50.758578   11818 kubeadm.go:310] W1007 10:22:50.740773     814 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 10:22:50.861175   11818 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 10:23:01.504310   11818 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 10:23:01.504429   11818 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 10:23:01.504532   11818 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 10:23:01.504655   11818 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 10:23:01.504807   11818 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 10:23:01.504906   11818 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 10:23:01.506628   11818 out.go:235]   - Generating certificates and keys ...
	I1007 10:23:01.506732   11818 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 10:23:01.506829   11818 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 10:23:01.506930   11818 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 10:23:01.507012   11818 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 10:23:01.507090   11818 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 10:23:01.507158   11818 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 10:23:01.507229   11818 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 10:23:01.507404   11818 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-681605 localhost] and IPs [192.168.39.71 127.0.0.1 ::1]
	I1007 10:23:01.507481   11818 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 10:23:01.507655   11818 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-681605 localhost] and IPs [192.168.39.71 127.0.0.1 ::1]
	I1007 10:23:01.507748   11818 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 10:23:01.507839   11818 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 10:23:01.507904   11818 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 10:23:01.508000   11818 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 10:23:01.508050   11818 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 10:23:01.508131   11818 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 10:23:01.508210   11818 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 10:23:01.508299   11818 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 10:23:01.508374   11818 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 10:23:01.508462   11818 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 10:23:01.508561   11818 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 10:23:01.510171   11818 out.go:235]   - Booting up control plane ...
	I1007 10:23:01.510253   11818 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 10:23:01.510322   11818 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 10:23:01.510387   11818 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 10:23:01.510474   11818 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 10:23:01.510554   11818 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 10:23:01.510601   11818 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 10:23:01.510771   11818 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 10:23:01.510918   11818 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 10:23:01.511004   11818 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001994563s
	I1007 10:23:01.511095   11818 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 10:23:01.511177   11818 kubeadm.go:310] [api-check] The API server is healthy after 5.503427981s
	I1007 10:23:01.511323   11818 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 10:23:01.511437   11818 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 10:23:01.511506   11818 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 10:23:01.511663   11818 kubeadm.go:310] [mark-control-plane] Marking the node addons-681605 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 10:23:01.511711   11818 kubeadm.go:310] [bootstrap-token] Using token: ci493c.491qyxmhvgz2m1ga
	I1007 10:23:01.513203   11818 out.go:235]   - Configuring RBAC rules ...
	I1007 10:23:01.513316   11818 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 10:23:01.513390   11818 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 10:23:01.513515   11818 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 10:23:01.513622   11818 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 10:23:01.513734   11818 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 10:23:01.513814   11818 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 10:23:01.513915   11818 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 10:23:01.513952   11818 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 10:23:01.513990   11818 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 10:23:01.513996   11818 kubeadm.go:310] 
	I1007 10:23:01.514048   11818 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 10:23:01.514054   11818 kubeadm.go:310] 
	I1007 10:23:01.514140   11818 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 10:23:01.514148   11818 kubeadm.go:310] 
	I1007 10:23:01.514169   11818 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 10:23:01.514220   11818 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 10:23:01.514271   11818 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 10:23:01.514282   11818 kubeadm.go:310] 
	I1007 10:23:01.514331   11818 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 10:23:01.514337   11818 kubeadm.go:310] 
	I1007 10:23:01.514376   11818 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 10:23:01.514388   11818 kubeadm.go:310] 
	I1007 10:23:01.514433   11818 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 10:23:01.514499   11818 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 10:23:01.514561   11818 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 10:23:01.514566   11818 kubeadm.go:310] 
	I1007 10:23:01.514674   11818 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 10:23:01.514786   11818 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 10:23:01.514796   11818 kubeadm.go:310] 
	I1007 10:23:01.514898   11818 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ci493c.491qyxmhvgz2m1ga \
	I1007 10:23:01.515041   11818 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df \
	I1007 10:23:01.515066   11818 kubeadm.go:310] 	--control-plane 
	I1007 10:23:01.515070   11818 kubeadm.go:310] 
	I1007 10:23:01.515143   11818 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 10:23:01.515149   11818 kubeadm.go:310] 
	I1007 10:23:01.515241   11818 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ci493c.491qyxmhvgz2m1ga \
	I1007 10:23:01.515394   11818 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df 
	I1007 10:23:01.515407   11818 cni.go:84] Creating CNI manager for ""
	I1007 10:23:01.515419   11818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 10:23:01.517265   11818 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 10:23:01.518443   11818 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 10:23:01.537745   11818 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1007 10:23:01.556142   11818 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 10:23:01.556258   11818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:23:01.556278   11818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-681605 minikube.k8s.io/updated_at=2024_10_07T10_23_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f minikube.k8s.io/name=addons-681605 minikube.k8s.io/primary=true
	I1007 10:23:01.579297   11818 ops.go:34] apiserver oom_adj: -16
	I1007 10:23:01.693108   11818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:23:02.193713   11818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:23:02.693857   11818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:23:03.193876   11818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:23:03.693791   11818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:23:04.193848   11818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:23:04.693482   11818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:23:05.194078   11818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:23:05.277617   11818 kubeadm.go:1113] duration metric: took 3.721448408s to wait for elevateKubeSystemPrivileges
	I1007 10:23:05.277648   11818 kubeadm.go:394] duration metric: took 14.717774013s to StartCluster
	I1007 10:23:05.277663   11818 settings.go:142] acquiring lock: {Name:mk699f217216dbe513edf6a42c79fe85f8c20124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:23:05.277785   11818 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:23:05.278239   11818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/kubeconfig: {Name:mkc8a5ce1dbafe55e056433fff5c065506f83346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:23:05.278460   11818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 10:23:05.278485   11818 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1007 10:23:05.278470   11818 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:23:05.278606   11818 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-681605"
	I1007 10:23:05.278619   11818 addons.go:69] Setting cloud-spanner=true in profile "addons-681605"
	I1007 10:23:05.278634   11818 addons.go:234] Setting addon cloud-spanner=true in "addons-681605"
	I1007 10:23:05.278600   11818 addons.go:69] Setting inspektor-gadget=true in profile "addons-681605"
	I1007 10:23:05.278657   11818 addons.go:234] Setting addon inspektor-gadget=true in "addons-681605"
	I1007 10:23:05.278665   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.278689   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.278695   11818 config.go:182] Loaded profile config "addons-681605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:23:05.278696   11818 addons.go:69] Setting gcp-auth=true in profile "addons-681605"
	I1007 10:23:05.278709   11818 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-681605"
	I1007 10:23:05.278743   11818 mustload.go:65] Loading cluster: addons-681605
	I1007 10:23:05.278753   11818 addons.go:69] Setting registry=true in profile "addons-681605"
	I1007 10:23:05.278744   11818 addons.go:69] Setting ingress=true in profile "addons-681605"
	I1007 10:23:05.278774   11818 addons.go:69] Setting storage-provisioner=true in profile "addons-681605"
	I1007 10:23:05.278791   11818 addons.go:234] Setting addon storage-provisioner=true in "addons-681605"
	I1007 10:23:05.278796   11818 addons.go:234] Setting addon ingress=true in "addons-681605"
	I1007 10:23:05.278634   11818 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-681605"
	I1007 10:23:05.278806   11818 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-681605"
	I1007 10:23:05.278823   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.278826   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.278743   11818 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-681605"
	I1007 10:23:05.279237   11818 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-681605"
	I1007 10:23:05.279285   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.278766   11818 addons.go:234] Setting addon registry=true in "addons-681605"
	I1007 10:23:05.279374   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.278588   11818 addons.go:69] Setting yakd=true in profile "addons-681605"
	I1007 10:23:05.278831   11818 addons.go:69] Setting volumesnapshots=true in profile "addons-681605"
	I1007 10:23:05.279565   11818 addons.go:234] Setting addon yakd=true in "addons-681605"
	I1007 10:23:05.279599   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.279602   11818 config.go:182] Loaded profile config "addons-681605": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:23:05.279615   11818 addons.go:234] Setting addon volumesnapshots=true in "addons-681605"
	I1007 10:23:05.279652   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.279868   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.279909   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.279904   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.279949   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.279144   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.280063   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.280085   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.280338   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.280374   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.279159   11818 addons.go:69] Setting ingress-dns=true in profile "addons-681605"
	I1007 10:23:05.280461   11818 addons.go:234] Setting addon ingress-dns=true in "addons-681605"
	I1007 10:23:05.279431   11818 addons.go:69] Setting volcano=true in profile "addons-681605"
	I1007 10:23:05.280468   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.279194   11818 addons.go:69] Setting default-storageclass=true in profile "addons-681605"
	I1007 10:23:05.280495   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.280490   11818 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-681605"
	I1007 10:23:05.280494   11818 addons.go:234] Setting addon volcano=true in "addons-681605"
	I1007 10:23:05.280533   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.280584   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.278598   11818 addons.go:69] Setting metrics-server=true in profile "addons-681605"
	I1007 10:23:05.280612   11818 addons.go:234] Setting addon metrics-server=true in "addons-681605"
	I1007 10:23:05.280637   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.280654   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.280661   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.280661   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.280741   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.280910   11818 out.go:177] * Verifying Kubernetes components...
	I1007 10:23:05.281102   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.281128   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.281145   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.281171   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.281199   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.281200   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.281208   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.281132   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.281753   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.282141   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.282372   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.282403   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.282615   11818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:23:05.298593   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36657
	I1007 10:23:05.299068   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39123
	I1007 10:23:05.299272   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.299467   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.299783   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.299818   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.300078   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.300096   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.300183   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.300583   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.300793   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.300806   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.300835   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.300974   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41031
	I1007 10:23:05.302490   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.308519   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.308585   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.308881   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.308924   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.310616   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.310655   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.328995   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.329180   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45237
	I1007 10:23:05.329790   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.329834   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.330258   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.330988   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.331031   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.331563   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38179
	I1007 10:23:05.331953   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
	I1007 10:23:05.332194   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.332383   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.332653   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.332671   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.333086   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.333335   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.333354   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.333452   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.333698   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.333765   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.334903   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.334927   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.335276   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.335456   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.337420   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43751
	I1007 10:23:05.338260   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.338302   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.340173   11818 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-681605"
	I1007 10:23:05.340226   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.340606   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.340627   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.341776   11818 addons.go:234] Setting addon default-storageclass=true in "addons-681605"
	I1007 10:23:05.341817   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:05.342213   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.342232   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.342548   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.348181   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.348217   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.348785   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.349604   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.349688   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.350070   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46423
	I1007 10:23:05.350691   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.351469   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.351488   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.351836   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.352402   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.352440   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.359200   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32781
	I1007 10:23:05.359963   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.360581   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.360601   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.360955   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.361498   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.361535   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.361767   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34075
	I1007 10:23:05.362738   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.363380   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.363398   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.363783   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.364381   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.364420   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.364689   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I1007 10:23:05.365381   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.365959   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.365978   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.366441   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.367080   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.367118   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.370149   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33947
	I1007 10:23:05.371202   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.371763   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.371782   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.372198   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.372405   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.374128   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42693
	I1007 10:23:05.374340   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.376347   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.376514   11818 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1007 10:23:05.377182   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.377201   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.378995   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.379030   11818 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1007 10:23:05.380248   11818 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1007 10:23:05.381879   11818 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1007 10:23:05.383114   11818 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1007 10:23:05.384242   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.384298   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.384519   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43793
	I1007 10:23:05.385063   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.385163   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39355
	I1007 10:23:05.385414   11818 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1007 10:23:05.385491   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.385897   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45825
	I1007 10:23:05.385962   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.385978   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.386108   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.386125   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.386456   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.386474   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.386678   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.386771   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.386906   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.386926   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.387369   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.387408   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.387632   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.387755   11818 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1007 10:23:05.388214   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.388275   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.390536   11818 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1007 10:23:05.390538   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35523
	I1007 10:23:05.391087   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.391755   11818 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1007 10:23:05.391780   11818 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1007 10:23:05.391802   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.392124   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.392143   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.392615   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.392819   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.394962   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I1007 10:23:05.395618   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.395716   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.396444   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39949
	I1007 10:23:05.396807   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.396579   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.397026   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.397043   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.397990   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36059
	I1007 10:23:05.397995   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.398173   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.398304   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:05.398822   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.398839   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.399231   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.399783   11818 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1007 10:23:05.400036   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44311
	I1007 10:23:05.402708   11818 out.go:177]   - Using image docker.io/registry:2.8.3
	I1007 10:23:05.404078   11818 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1007 10:23:05.404099   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1007 10:23:05.404123   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.407057   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36409
	I1007 10:23:05.407649   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.407736   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.408103   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33989
	I1007 10:23:05.408411   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.408428   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.408949   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.409029   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.409046   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.409589   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.409634   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.409798   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.409900   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.409941   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34287
	I1007 10:23:05.410093   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34467
	I1007 10:23:05.413956   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43849
	I1007 10:23:05.414146   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.414161   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.414213   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.414741   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44533
	I1007 10:23:05.416190   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.416234   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.416381   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:05.416465   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.416670   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.416709   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.416898   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.417031   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.417076   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.417089   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.417380   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.417395   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.417472   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.417613   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.417623   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.417680   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.418135   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.418150   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.418168   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.418141   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.418251   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.418292   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.418306   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.418318   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.418335   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.418909   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.418925   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.418976   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.419087   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.419113   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.419122   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.419221   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.419240   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.419271   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.419446   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.419877   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:05.419907   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:05.420294   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.420303   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.420484   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.421705   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.421932   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.422931   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.423909   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.424051   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.424284   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:05.424812   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:05.425213   11818 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 10:23:05.425575   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.425637   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:05.425656   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:05.425663   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:05.425671   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:05.425677   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:05.425773   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.425871   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.426162   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:05.426182   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	W1007 10:23:05.426245   11818 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1007 10:23:05.427404   11818 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1007 10:23:05.427531   11818 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1007 10:23:05.427636   11818 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1007 10:23:05.427735   11818 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1007 10:23:05.428830   11818 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1007 10:23:05.428845   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1007 10:23:05.428864   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.428947   11818 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1007 10:23:05.428995   11818 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1007 10:23:05.429186   11818 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1007 10:23:05.429205   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.429600   11818 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1007 10:23:05.429613   11818 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1007 10:23:05.429629   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.430449   11818 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1007 10:23:05.430465   11818 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1007 10:23:05.430483   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.430609   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38211
	I1007 10:23:05.431168   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.431371   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.431712   11818 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 10:23:05.431802   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41137
	I1007 10:23:05.432096   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.432110   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.432497   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.432564   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.432751   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.432811   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.433152   11818 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 10:23:05.433170   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1007 10:23:05.433187   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.433341   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.433359   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.433375   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.433390   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.433791   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.433972   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.434044   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.434280   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.434434   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:05.434724   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.434785   11818 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1007 10:23:05.434964   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.434990   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.435124   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.435143   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.435330   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.435389   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.435497   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.435648   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:05.435904   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.435922   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.435952   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.436095   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.436256   11818 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 10:23:05.436272   11818 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 10:23:05.436287   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.436329   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.437357   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:05.438352   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.439444   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.439464   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.439502   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.439692   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.439913   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.440075   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.440340   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:05.441464   11818 out.go:177]   - Using image docker.io/busybox:stable
	I1007 10:23:05.441472   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.441496   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.441702   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.441767   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.441899   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.441920   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.442135   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.442180   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.442293   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.442362   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.442395   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.442526   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.442711   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:05.442966   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.443126   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:05.444301   11818 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 10:23:05.444390   11818 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1007 10:23:05.446092   11818 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 10:23:05.446114   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1007 10:23:05.446137   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.446241   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36335
	I1007 10:23:05.446264   11818 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 10:23:05.446272   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 10:23:05.446283   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.446843   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.449147   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.449163   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.449554   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.449716   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.449755   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.450186   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.450195   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.450219   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.450403   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.450439   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.450462   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.450633   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.450678   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.450960   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.451003   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.451293   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:05.451587   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.451830   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:05.452104   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.454061   11818 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1007 10:23:05.454232   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I1007 10:23:05.454681   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.455169   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.455191   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.455684   11818 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 10:23:05.455702   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1007 10:23:05.455720   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.455815   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.455971   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:05.461669   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.461893   11818 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 10:23:05.461907   11818 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 10:23:05.461922   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.462722   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.463111   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.463129   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.463252   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.463383   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.463681   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.463799   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:05.464696   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38239
	I1007 10:23:05.464977   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.465118   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:05.465359   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.465383   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.465549   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.465705   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:05.465715   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:05.465774   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.465891   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.465971   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:05.466026   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:05.466275   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	W1007 10:23:05.466791   11818 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:55582->192.168.39.71:22: read: connection reset by peer
	I1007 10:23:05.466815   11818 retry.go:31] will retry after 304.283437ms: ssh: handshake failed: read tcp 192.168.39.1:55582->192.168.39.71:22: read: connection reset by peer
	I1007 10:23:05.467610   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:05.469821   11818 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1007 10:23:05.471324   11818 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 10:23:05.471342   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1007 10:23:05.471361   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:05.474286   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.474752   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:05.474770   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:05.474941   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:05.475121   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:05.475258   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:05.475405   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	W1007 10:23:05.484193   11818 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:55594->192.168.39.71:22: read: connection reset by peer
	I1007 10:23:05.484230   11818 retry.go:31] will retry after 231.103974ms: ssh: handshake failed: read tcp 192.168.39.1:55594->192.168.39.71:22: read: connection reset by peer
	I1007 10:23:05.696750   11818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:23:05.697236   11818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 10:23:05.822097   11818 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1007 10:23:05.822122   11818 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1007 10:23:05.835174   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 10:23:05.852988   11818 node_ready.go:35] waiting up to 6m0s for node "addons-681605" to be "Ready" ...
	I1007 10:23:05.869222   11818 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1007 10:23:05.869255   11818 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1007 10:23:05.877111   11818 node_ready.go:49] node "addons-681605" has status "Ready":"True"
	I1007 10:23:05.877143   11818 node_ready.go:38] duration metric: took 24.124157ms for node "addons-681605" to be "Ready" ...
	I1007 10:23:05.877156   11818 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 10:23:05.901571   11818 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-59fw7" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:05.980924   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 10:23:06.005004   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 10:23:06.015870   11818 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 10:23:06.015890   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1007 10:23:06.018263   11818 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1007 10:23:06.018286   11818 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1007 10:23:06.019663   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1007 10:23:06.024066   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 10:23:06.063251   11818 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1007 10:23:06.063279   11818 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1007 10:23:06.079842   11818 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1007 10:23:06.079867   11818 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1007 10:23:06.136943   11818 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1007 10:23:06.136966   11818 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1007 10:23:06.148171   11818 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 10:23:06.148191   11818 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 10:23:06.172027   11818 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1007 10:23:06.172053   11818 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1007 10:23:06.174059   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 10:23:06.175666   11818 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1007 10:23:06.175689   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1007 10:23:06.242171   11818 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1007 10:23:06.242200   11818 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1007 10:23:06.258124   11818 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 10:23:06.258148   11818 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 10:23:06.270416   11818 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1007 10:23:06.270438   11818 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1007 10:23:06.298648   11818 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1007 10:23:06.298675   11818 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1007 10:23:06.304149   11818 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1007 10:23:06.304173   11818 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1007 10:23:06.357129   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1007 10:23:06.358451   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 10:23:06.453317   11818 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1007 10:23:06.453345   11818 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1007 10:23:06.456524   11818 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1007 10:23:06.456550   11818 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1007 10:23:06.468963   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 10:23:06.495116   11818 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1007 10:23:06.495146   11818 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1007 10:23:06.506860   11818 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1007 10:23:06.506889   11818 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1007 10:23:06.670359   11818 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1007 10:23:06.670389   11818 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1007 10:23:06.679950   11818 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1007 10:23:06.679998   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1007 10:23:06.684830   11818 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1007 10:23:06.684856   11818 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1007 10:23:06.715425   11818 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 10:23:06.715449   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1007 10:23:06.818008   11818 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1007 10:23:06.818038   11818 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1007 10:23:06.856815   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1007 10:23:06.882407   11818 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1007 10:23:06.882448   11818 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1007 10:23:06.937670   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 10:23:07.095664   11818 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1007 10:23:07.095687   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1007 10:23:07.140207   11818 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1007 10:23:07.140236   11818 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1007 10:23:07.356350   11818 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 10:23:07.356379   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1007 10:23:07.379305   11818 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1007 10:23:07.379332   11818 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1007 10:23:07.522629   11818 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.82534643s)
	I1007 10:23:07.522666   11818 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1007 10:23:07.531379   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.696162237s)
	I1007 10:23:07.531453   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:07.531468   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:07.531797   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:07.531833   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:07.531848   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:07.531866   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:07.531874   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:07.532116   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:07.532133   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:07.532137   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:07.652094   11818 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1007 10:23:07.652120   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1007 10:23:07.657562   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 10:23:07.907915   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-59fw7" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:08.042230   11818 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-681605" context rescaled to 1 replicas
	I1007 10:23:08.053863   11818 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1007 10:23:08.053892   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1007 10:23:08.345320   11818 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 10:23:08.345348   11818 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1007 10:23:08.647604   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 10:23:09.628987   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.648023433s)
	I1007 10:23:09.629051   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:09.629067   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:09.629374   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:09.629397   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:09.629411   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:09.629419   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:09.629736   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:09.629760   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:09.910718   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-59fw7" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:12.489854   11818 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1007 10:23:12.489897   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:12.492796   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:12.493234   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:12.493262   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:12.493414   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:12.493616   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:12.493759   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:12.493895   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:12.510070   11818 pod_ready.go:93] pod "coredns-7c65d6cfc9-59fw7" in "kube-system" namespace has status "Ready":"True"
	I1007 10:23:12.510095   11818 pod_ready.go:82] duration metric: took 6.608495564s for pod "coredns-7c65d6cfc9-59fw7" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:12.510107   11818 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:12.841571   11818 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1007 10:23:12.968886   11818 addons.go:234] Setting addon gcp-auth=true in "addons-681605"
	I1007 10:23:12.968933   11818 host.go:66] Checking if "addons-681605" exists ...
	I1007 10:23:12.969328   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:12.969363   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:12.985171   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36929
	I1007 10:23:12.985610   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:12.986085   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:12.986103   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:12.986475   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:12.987066   11818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:23:12.987100   11818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:23:13.002726   11818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I1007 10:23:13.003191   11818 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:23:13.003681   11818 main.go:141] libmachine: Using API Version  1
	I1007 10:23:13.003707   11818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:23:13.004099   11818 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:23:13.004287   11818 main.go:141] libmachine: (addons-681605) Calling .GetState
	I1007 10:23:13.005718   11818 main.go:141] libmachine: (addons-681605) Calling .DriverName
	I1007 10:23:13.005996   11818 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1007 10:23:13.006023   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHHostname
	I1007 10:23:13.008789   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:13.009161   11818 main.go:141] libmachine: (addons-681605) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:aa:32", ip: ""} in network mk-addons-681605: {Iface:virbr1 ExpiryTime:2024-10-07 11:22:35 +0000 UTC Type:0 Mac:52:54:00:a3:aa:32 Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:addons-681605 Clientid:01:52:54:00:a3:aa:32}
	I1007 10:23:13.009188   11818 main.go:141] libmachine: (addons-681605) DBG | domain addons-681605 has defined IP address 192.168.39.71 and MAC address 52:54:00:a3:aa:32 in network mk-addons-681605
	I1007 10:23:13.009350   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHPort
	I1007 10:23:13.009502   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHKeyPath
	I1007 10:23:13.009623   11818 main.go:141] libmachine: (addons-681605) Calling .GetSSHUsername
	I1007 10:23:13.009774   11818 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/addons-681605/id_rsa Username:docker}
	I1007 10:23:14.127201   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.122158292s)
	I1007 10:23:14.127278   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.127280   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.107593349s)
	I1007 10:23:14.127301   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.127312   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.127304   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.127373   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.103276251s)
	I1007 10:23:14.127411   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.127427   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.127434   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.953348872s)
	I1007 10:23:14.127463   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.127475   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.127520   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.770363402s)
	I1007 10:23:14.127547   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.127558   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.127619   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.769145292s)
	I1007 10:23:14.127640   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.127647   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.127742   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.658750291s)
	I1007 10:23:14.127758   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.127782   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.127816   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.270967408s)
	I1007 10:23:14.127843   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.127855   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.127919   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.190221576s)
	W1007 10:23:14.127941   11818 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1007 10:23:14.127958   11818 retry.go:31] will retry after 158.71667ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1007 10:23:14.128056   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.47046227s)
	I1007 10:23:14.128072   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.128080   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.128544   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.128554   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.128567   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.128573   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.128575   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.128580   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.128583   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.128588   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.128594   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.128914   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.128927   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.128935   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.128952   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.128999   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.129019   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.129024   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.129030   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.129036   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.129214   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.129237   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.129267   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.129275   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.129281   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.129320   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.129343   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.129348   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.129390   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.129396   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.129646   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.129664   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.129675   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.129682   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.129691   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.129698   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.129705   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.129712   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.129755   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.129774   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.129780   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.129788   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.129794   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.129805   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.129821   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.129848   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.129854   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.129863   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.129872   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.129880   11818 addons.go:475] Verifying addon metrics-server=true in "addons-681605"
	I1007 10:23:14.129914   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.129933   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.129940   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.129946   11818 addons.go:475] Verifying addon ingress=true in "addons-681605"
	I1007 10:23:14.130151   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.130174   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.130180   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.131189   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.131242   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.131406   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.131447   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.131484   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.131524   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.131752   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.131767   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.131827   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.131849   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.131855   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.132702   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.132722   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.132730   11818 addons.go:475] Verifying addon registry=true in "addons-681605"
	I1007 10:23:14.133706   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.133730   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.134044   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.134498   11818 out.go:177] * Verifying ingress addon...
	I1007 10:23:14.136306   11818 out.go:177] * Verifying registry addon...
	I1007 10:23:14.136306   11818 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-681605 service yakd-dashboard -n yakd-dashboard
	
	I1007 10:23:14.136945   11818 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1007 10:23:14.138164   11818 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1007 10:23:14.153590   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.153610   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.153861   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.153878   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	W1007 10:23:14.153974   11818 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1007 10:23:14.155090   11818 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1007 10:23:14.155109   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:14.156392   11818 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1007 10:23:14.156407   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:14.159567   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:14.159583   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:14.159818   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:14.159843   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:14.159852   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:14.287814   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 10:23:14.515972   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:14.644858   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:14.645456   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:15.143280   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:15.242818   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:15.669468   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:15.670963   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.023318912s)
	I1007 10:23:15.671001   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:15.671021   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:15.671042   11818 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.665021816s)
	I1007 10:23:15.671270   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:15.671279   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:15.671295   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:15.671317   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:15.671329   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:15.671525   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:15.671538   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:15.671546   11818 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-681605"
	I1007 10:23:15.673267   11818 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 10:23:15.674601   11818 out.go:177] * Verifying csi-hostpath-driver addon...
	I1007 10:23:15.675880   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:15.676213   11818 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1007 10:23:15.677147   11818 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1007 10:23:15.677333   11818 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1007 10:23:15.677348   11818 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1007 10:23:15.725075   11818 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1007 10:23:15.725114   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:15.753671   11818 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1007 10:23:15.753701   11818 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1007 10:23:15.873742   11818 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 10:23:15.873771   11818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1007 10:23:15.984058   11818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 10:23:16.142556   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:16.143054   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:16.181987   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:16.517735   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:16.642700   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:16.644830   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:16.682368   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:16.815360   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.52749728s)
	I1007 10:23:16.815421   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:16.815439   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:16.815728   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:16.815769   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:16.815727   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:16.815781   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:16.815874   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:16.816063   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:16.816065   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:16.816114   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:17.148120   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:17.148752   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:17.251345   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:17.375177   11818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.391071612s)
	I1007 10:23:17.375251   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:17.375273   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:17.375568   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:17.375592   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:17.375607   11818 main.go:141] libmachine: Making call to close driver server
	I1007 10:23:17.375618   11818 main.go:141] libmachine: (addons-681605) Calling .Close
	I1007 10:23:17.375819   11818 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:23:17.375836   11818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:23:17.375835   11818 main.go:141] libmachine: (addons-681605) DBG | Closing plugin on server side
	I1007 10:23:17.377716   11818 addons.go:475] Verifying addon gcp-auth=true in "addons-681605"
	I1007 10:23:17.379534   11818 out.go:177] * Verifying gcp-auth addon...
	I1007 10:23:17.381528   11818 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1007 10:23:17.430897   11818 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1007 10:23:17.430917   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:17.642784   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:17.642829   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:17.683115   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:17.886116   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:18.147327   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:18.147536   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:18.184808   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:18.387001   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:18.520419   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:18.643759   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:18.644075   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:18.681453   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:18.884530   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:19.142623   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:19.143559   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:19.182771   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:19.385165   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:19.642139   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:19.642164   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:19.682005   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:19.886167   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:20.142997   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:20.143104   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:20.181553   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:20.384904   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:20.642329   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:20.643074   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:20.682789   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:20.885857   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:21.018052   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:21.142420   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:21.142812   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:21.182775   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:21.385350   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:21.640903   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:21.641886   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:21.681498   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:21.885736   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:22.142253   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:22.142729   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:22.182293   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:22.386202   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:22.642385   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:22.643042   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:22.681963   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:22.885666   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:23.142423   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:23.142436   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:23.182345   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:23.385318   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:23.517634   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:23.644452   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:23.646085   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:23.682322   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:23.884780   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:24.141873   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:24.142168   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:24.181798   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:24.386060   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:24.641483   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:24.642963   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:24.682125   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:24.889277   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:25.143396   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:25.143601   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:25.182085   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:25.385464   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:25.640805   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:25.641874   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:25.681927   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:25.886914   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:26.016767   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:26.215536   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:26.216713   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:26.217083   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:26.385553   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:26.641375   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:26.641978   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:26.681796   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:26.885090   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:27.142297   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:27.143703   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:27.182643   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:27.385729   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:27.642468   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:27.642772   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:27.684354   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:27.885274   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:28.017706   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:28.141493   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:28.142190   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:28.182366   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:28.385267   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:28.642666   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:28.642774   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:28.681948   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:28.885491   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:29.140920   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:29.142180   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:29.182128   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:29.386708   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:29.641761   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:29.642695   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:29.682798   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:29.885081   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:30.140678   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:30.142532   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:30.182036   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:30.638434   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:30.642821   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:30.645103   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:30.647771   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:30.682215   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:30.886272   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:31.141793   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:31.142751   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:31.182194   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:31.386284   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:31.642183   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:31.643265   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:31.682033   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:31.885614   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:32.141947   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:32.142366   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:32.182150   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:32.386247   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:32.643336   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:32.645576   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:32.683112   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:32.885770   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:33.016463   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:33.141552   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:33.141848   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:33.181715   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:33.384458   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:33.641816   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:33.642600   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:33.681764   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:33.886028   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:34.142184   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:34.143160   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:34.182320   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:34.385629   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:34.640579   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:34.642581   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:34.681814   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:34.885758   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:35.142381   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:35.142773   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:35.183188   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:35.387334   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:35.517119   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:35.642177   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:35.642745   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:35.681958   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:35.885966   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:36.141279   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:36.142205   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:36.182081   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:36.385595   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:36.641171   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:36.642056   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:36.682689   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:36.885458   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:37.141655   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:37.141813   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:37.181278   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:37.384649   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:37.517799   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:37.642677   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:37.643075   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:37.683015   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:37.885389   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:38.141126   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:38.141719   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:38.181340   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:38.384516   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:38.641279   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:38.644121   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:38.682055   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:38.886437   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:39.141714   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:39.142113   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:39.182314   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:39.385826   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:39.642837   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:39.642973   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:39.681960   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:39.886040   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:40.016370   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:40.142465   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:40.142850   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:40.181079   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:40.385758   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:40.641990   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:40.642398   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:40.682214   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:40.886117   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:41.145828   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:41.147407   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:41.182601   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:41.386623   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:41.650673   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:41.651265   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:41.752547   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:41.885618   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:42.142009   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:42.142553   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:42.181895   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:42.385206   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:42.520396   11818 pod_ready.go:103] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"False"
	I1007 10:23:42.641347   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:42.643769   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:42.681744   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:42.885734   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:43.142976   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:43.143017   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:43.182541   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:43.386615   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:43.642749   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:43.642857   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:43.681540   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:43.885062   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:44.142054   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:44.142738   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:44.181646   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:44.390514   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:44.641319   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:44.642204   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:44.681959   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:44.886125   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:45.017289   11818 pod_ready.go:93] pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace has status "Ready":"True"
	I1007 10:23:45.017318   11818 pod_ready.go:82] duration metric: took 32.507202793s for pod "coredns-7c65d6cfc9-9wqp6" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:45.017330   11818 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-681605" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:45.022011   11818 pod_ready.go:93] pod "etcd-addons-681605" in "kube-system" namespace has status "Ready":"True"
	I1007 10:23:45.022038   11818 pod_ready.go:82] duration metric: took 4.700937ms for pod "etcd-addons-681605" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:45.022052   11818 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-681605" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:45.027119   11818 pod_ready.go:93] pod "kube-apiserver-addons-681605" in "kube-system" namespace has status "Ready":"True"
	I1007 10:23:45.027149   11818 pod_ready.go:82] duration metric: took 5.088063ms for pod "kube-apiserver-addons-681605" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:45.027160   11818 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-681605" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:45.031962   11818 pod_ready.go:93] pod "kube-controller-manager-addons-681605" in "kube-system" namespace has status "Ready":"True"
	I1007 10:23:45.032007   11818 pod_ready.go:82] duration metric: took 4.837357ms for pod "kube-controller-manager-addons-681605" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:45.032020   11818 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4rgzz" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:45.037845   11818 pod_ready.go:93] pod "kube-proxy-4rgzz" in "kube-system" namespace has status "Ready":"True"
	I1007 10:23:45.037868   11818 pod_ready.go:82] duration metric: took 5.841055ms for pod "kube-proxy-4rgzz" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:45.037876   11818 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-681605" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:45.143001   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:45.143362   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:45.184384   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:45.550610   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:45.551012   11818 pod_ready.go:93] pod "kube-scheduler-addons-681605" in "kube-system" namespace has status "Ready":"True"
	I1007 10:23:45.551032   11818 pod_ready.go:82] duration metric: took 513.14799ms for pod "kube-scheduler-addons-681605" in "kube-system" namespace to be "Ready" ...
	I1007 10:23:45.551042   11818 pod_ready.go:39] duration metric: took 39.673875264s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
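The pod_ready waits recorded above repeatedly fetch each pod and check its Ready condition until it reports True or the per-pod timeout (6m0s in this run) expires. Below is a minimal client-go sketch of that style of check, not minikube's actual pod_ready.go implementation; the kubeconfig path and pod name are taken from the log above, and the 2-second poll interval is an assumption.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls the named pod until its Ready condition is True or the timeout expires.
func waitForPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodReady(cs, "kube-system", "coredns-7c65d6cfc9-9wqp6", 6*time.Minute); err != nil {
		fmt.Println("pod did not become Ready:", err)
	}
}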
	I1007 10:23:45.551062   11818 api_server.go:52] waiting for apiserver process to appear ...
	I1007 10:23:45.551125   11818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 10:23:45.574452   11818 api_server.go:72] duration metric: took 40.295870948s to wait for apiserver process to appear ...
	I1007 10:23:45.574478   11818 api_server.go:88] waiting for apiserver healthz status ...
	I1007 10:23:45.574498   11818 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I1007 10:23:45.579296   11818 api_server.go:279] https://192.168.39.71:8443/healthz returned 200:
	ok
	I1007 10:23:45.580699   11818 api_server.go:141] control plane version: v1.31.1
	I1007 10:23:45.580727   11818 api_server.go:131] duration metric: took 6.241356ms to wait for apiserver health ...
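The healthz check above is a plain HTTPS GET against the apiserver endpoint recorded in the log; a 200 response with body "ok" is treated as healthy. A self-contained sketch of such a probe follows; the endpoint is copied from the log, while skipping TLS verification is only an illustrative shortcut (a real client, like minikube's, authenticates with the cluster certificates).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint copied from the log above; a production client would trust the cluster CA
	// and present client certificates instead of skipping verification.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.71:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}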
	I1007 10:23:45.580736   11818 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 10:23:45.619754   11818 system_pods.go:59] 17 kube-system pods found
	I1007 10:23:45.619785   11818 system_pods.go:61] "coredns-7c65d6cfc9-9wqp6" [aab4529c-a075-4383-b45b-c26fa0aafe31] Running
	I1007 10:23:45.619792   11818 system_pods.go:61] "csi-hostpath-attacher-0" [6a722b86-0d68-4a92-84a4-a1db2bff5162] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1007 10:23:45.619800   11818 system_pods.go:61] "csi-hostpath-resizer-0" [5fedba4d-1b30-4b3a-904c-6e10b5381894] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1007 10:23:45.619807   11818 system_pods.go:61] "csi-hostpathplugin-ckx6s" [ef8f4f3f-592d-44e2-aa3f-3b372f01185d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1007 10:23:45.619811   11818 system_pods.go:61] "etcd-addons-681605" [d5a3f208-5b5a-4a86-89b9-8f30e1b08fff] Running
	I1007 10:23:45.619815   11818 system_pods.go:61] "kube-apiserver-addons-681605" [5739ca02-93c2-4efc-b639-906fdcb4c6b9] Running
	I1007 10:23:45.619818   11818 system_pods.go:61] "kube-controller-manager-addons-681605" [37f7ee25-9813-4354-bb91-288c87feaa2e] Running
	I1007 10:23:45.619823   11818 system_pods.go:61] "kube-ingress-dns-minikube" [e17c292c-1ebb-47e8-9d91-4a32661ea133] Running
	I1007 10:23:45.619826   11818 system_pods.go:61] "kube-proxy-4rgzz" [dfdd32a0-cf41-4a5e-ac6f-ccadb50f64a7] Running
	I1007 10:23:45.619830   11818 system_pods.go:61] "kube-scheduler-addons-681605" [744a8102-3a53-4e53-9770-95bf8e08d7c5] Running
	I1007 10:23:45.619835   11818 system_pods.go:61] "metrics-server-84c5f94fbc-z5fpj" [3b2974fc-b174-48a3-b7ed-5e1ae0743bb4] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 10:23:45.619839   11818 system_pods.go:61] "nvidia-device-plugin-daemonset-5qr65" [50ebff62-241e-44a1-a190-cbc7791e17c6] Running
	I1007 10:23:45.619848   11818 system_pods.go:61] "registry-66c9cd494c-j5b9g" [16a6aecf-e13b-4534-83e7-70fdf57bd954] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1007 10:23:45.619855   11818 system_pods.go:61] "registry-proxy-tr9b7" [2c257dda-ca4a-4383-904e-6a600fa871bd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1007 10:23:45.619863   11818 system_pods.go:61] "snapshot-controller-56fcc65765-68xj2" [ee9f6a14-fe4d-479b-8bbd-cc70f937e384] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1007 10:23:45.619872   11818 system_pods.go:61] "snapshot-controller-56fcc65765-jx5xc" [0dd2b0be-649e-4fca-9448-171e927c841c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1007 10:23:45.619875   11818 system_pods.go:61] "storage-provisioner" [be6826cc-4ed3-43a6-9da7-09ba7c596ecf] Running
	I1007 10:23:45.619881   11818 system_pods.go:74] duration metric: took 39.140308ms to wait for pod list to return data ...
	I1007 10:23:45.619888   11818 default_sa.go:34] waiting for default service account to be created ...
	I1007 10:23:45.651139   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:45.653483   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:45.686408   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:45.814844   11818 default_sa.go:45] found service account: "default"
	I1007 10:23:45.814869   11818 default_sa.go:55] duration metric: took 194.974633ms for default service account to be created ...
	I1007 10:23:45.814877   11818 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 10:23:45.885089   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:46.021093   11818 system_pods.go:86] 17 kube-system pods found
	I1007 10:23:46.021120   11818 system_pods.go:89] "coredns-7c65d6cfc9-9wqp6" [aab4529c-a075-4383-b45b-c26fa0aafe31] Running
	I1007 10:23:46.021128   11818 system_pods.go:89] "csi-hostpath-attacher-0" [6a722b86-0d68-4a92-84a4-a1db2bff5162] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1007 10:23:46.021135   11818 system_pods.go:89] "csi-hostpath-resizer-0" [5fedba4d-1b30-4b3a-904c-6e10b5381894] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1007 10:23:46.021142   11818 system_pods.go:89] "csi-hostpathplugin-ckx6s" [ef8f4f3f-592d-44e2-aa3f-3b372f01185d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1007 10:23:46.021146   11818 system_pods.go:89] "etcd-addons-681605" [d5a3f208-5b5a-4a86-89b9-8f30e1b08fff] Running
	I1007 10:23:46.021150   11818 system_pods.go:89] "kube-apiserver-addons-681605" [5739ca02-93c2-4efc-b639-906fdcb4c6b9] Running
	I1007 10:23:46.021154   11818 system_pods.go:89] "kube-controller-manager-addons-681605" [37f7ee25-9813-4354-bb91-288c87feaa2e] Running
	I1007 10:23:46.021158   11818 system_pods.go:89] "kube-ingress-dns-minikube" [e17c292c-1ebb-47e8-9d91-4a32661ea133] Running
	I1007 10:23:46.021161   11818 system_pods.go:89] "kube-proxy-4rgzz" [dfdd32a0-cf41-4a5e-ac6f-ccadb50f64a7] Running
	I1007 10:23:46.021164   11818 system_pods.go:89] "kube-scheduler-addons-681605" [744a8102-3a53-4e53-9770-95bf8e08d7c5] Running
	I1007 10:23:46.021169   11818 system_pods.go:89] "metrics-server-84c5f94fbc-z5fpj" [3b2974fc-b174-48a3-b7ed-5e1ae0743bb4] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 10:23:46.021174   11818 system_pods.go:89] "nvidia-device-plugin-daemonset-5qr65" [50ebff62-241e-44a1-a190-cbc7791e17c6] Running
	I1007 10:23:46.021182   11818 system_pods.go:89] "registry-66c9cd494c-j5b9g" [16a6aecf-e13b-4534-83e7-70fdf57bd954] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1007 10:23:46.021187   11818 system_pods.go:89] "registry-proxy-tr9b7" [2c257dda-ca4a-4383-904e-6a600fa871bd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1007 10:23:46.021194   11818 system_pods.go:89] "snapshot-controller-56fcc65765-68xj2" [ee9f6a14-fe4d-479b-8bbd-cc70f937e384] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1007 10:23:46.021204   11818 system_pods.go:89] "snapshot-controller-56fcc65765-jx5xc" [0dd2b0be-649e-4fca-9448-171e927c841c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1007 10:23:46.021210   11818 system_pods.go:89] "storage-provisioner" [be6826cc-4ed3-43a6-9da7-09ba7c596ecf] Running
	I1007 10:23:46.021217   11818 system_pods.go:126] duration metric: took 206.33548ms to wait for k8s-apps to be running ...
	I1007 10:23:46.021225   11818 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 10:23:46.021265   11818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 10:23:46.066314   11818 system_svc.go:56] duration metric: took 45.07976ms WaitForService to wait for kubelet
	I1007 10:23:46.066339   11818 kubeadm.go:582] duration metric: took 40.787761566s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 10:23:46.066358   11818 node_conditions.go:102] verifying NodePressure condition ...
	I1007 10:23:46.141613   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:46.142248   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:46.182504   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:46.215223   11818 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 10:23:46.215253   11818 node_conditions.go:123] node cpu capacity is 2
	I1007 10:23:46.215264   11818 node_conditions.go:105] duration metric: took 148.901881ms to run NodePressure ...
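The NodePressure verification above reads the node's reported capacity (ephemeral storage, CPU) and its pressure conditions. The following is a short client-go sketch of an equivalent lookup, again illustrative rather than minikube's node_conditions.go code; the kubeconfig path is the one shown earlier in the log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			// A healthy node reports False for the pressure conditions checked here.
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}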
	I1007 10:23:46.215275   11818 start.go:241] waiting for startup goroutines ...
	I1007 10:23:46.386290   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:46.641642   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:46.641798   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:46.682782   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:46.885741   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:47.141898   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:47.142330   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:47.182035   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:47.385602   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:47.642686   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:47.643013   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:47.681204   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:47.885844   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:48.142148   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:48.142606   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:48.181450   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:48.385823   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:48.642441   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:48.642973   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:48.684325   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:48.885834   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:49.141944   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:49.144062   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:49.181676   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:49.385285   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:49.644109   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:49.644359   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:49.682403   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:49.885653   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:50.142167   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:50.143378   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:50.181934   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:50.387852   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:50.641575   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:50.643931   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:50.682043   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:50.955370   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:51.199294   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:51.199386   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:51.199678   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:51.385181   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:51.641371   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:51.641553   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:51.681296   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:51.885442   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:52.142942   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:52.143540   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:52.182446   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:52.385659   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:52.642174   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:52.643177   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:52.681826   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:52.885277   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:53.140898   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:53.141635   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:53.181520   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:53.385796   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:53.643302   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:53.643683   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:53.682723   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:53.885652   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:54.142017   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:54.142432   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:54.182521   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:54.384775   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:54.642218   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:54.643212   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:54.682639   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:54.885512   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:55.141958   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:55.142079   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:55.191577   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:55.385407   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:55.642172   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:55.642728   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:55.682467   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:55.886195   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:56.141256   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:56.142156   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:56.182053   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:56.388235   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:56.642036   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:56.642363   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:56.682602   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:56.885469   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:57.409842   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:57.410294   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:57.410920   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:57.411142   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:57.642459   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:57.642578   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:57.683180   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:57.886391   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:58.141439   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:58.141755   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:23:58.181137   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:58.389527   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:58.643022   11818 kapi.go:107] duration metric: took 44.504855053s to wait for kubernetes.io/minikube-addons=registry ...
	I1007 10:23:58.643648   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:58.682370   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:58.885830   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:59.142455   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:59.183672   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:59.385976   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:23:59.642274   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:23:59.681600   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:23:59.885487   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:00.142042   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:00.182673   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:00.385248   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:00.642553   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:00.682802   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:00.884729   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:01.142407   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:01.181776   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:01.384652   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:01.642475   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:01.682262   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:01.885596   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:02.141495   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:02.182160   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:02.385858   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:02.642104   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:02.681540   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:02.885896   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:03.143203   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:03.181611   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:03.386928   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:03.646148   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:03.683011   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:03.885778   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:04.141515   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:04.182202   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:04.385721   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:04.641649   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:04.681698   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:04.884816   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:05.142090   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:05.181549   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:05.384481   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:05.642000   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:05.681171   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:05.885936   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:06.141880   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:06.183444   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:06.386290   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:06.641988   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:06.682244   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:06.885231   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:07.186517   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:07.187199   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:07.386047   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:07.642068   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:07.681525   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:07.885928   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:08.146111   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:08.247082   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:08.385761   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:08.657778   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:08.682194   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:08.885286   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:09.140723   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:09.182469   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:09.387743   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:09.641672   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:09.682177   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:09.977414   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:10.146762   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:10.194396   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:10.386102   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:10.647850   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:10.684413   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:10.887634   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:11.141464   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:11.182322   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:11.385349   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:11.641680   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:11.682454   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:11.886441   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:12.141717   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:12.182133   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:12.386602   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:12.644260   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:12.688154   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:12.890761   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:13.142384   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:13.181754   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:13.384843   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:13.642440   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:13.682243   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:13.886070   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:14.141223   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:14.181866   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:14.385258   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:14.641055   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:14.681372   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:14.884661   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:15.141622   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:15.182258   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:15.386002   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:15.643004   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:15.682097   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:15.891659   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:16.141898   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:16.180900   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:16.385613   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:16.641856   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:16.681229   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:16.885245   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:17.141108   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:17.181760   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:17.385865   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:17.641619   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:17.681994   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:17.951684   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:18.141898   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:18.182985   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:18.385391   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:18.642173   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:18.693371   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:18.888505   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:19.141865   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:19.182196   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:19.386199   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:19.640767   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:19.686134   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:19.886336   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:20.142001   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:20.183357   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:20.385673   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:20.641018   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:20.682323   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:20.885387   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:21.141229   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:21.182571   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:21.384907   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:21.642978   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:21.681532   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:21.885211   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:22.141370   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:22.182486   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:22.385142   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:22.642992   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:22.682112   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:23.338431   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:23.338891   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:23.340879   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:23.437948   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:23.656162   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:23.684080   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:23.885482   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:24.141710   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:24.184755   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:24.385420   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:24.641877   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:24.681988   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:24.885583   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:25.143976   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:25.182651   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:25.441714   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:25.641956   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:25.742841   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:25.885630   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:26.141679   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:26.182576   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:26.389724   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:26.642108   11818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:24:26.682921   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:26.903339   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:27.146594   11818 kapi.go:107] duration metric: took 1m13.009643159s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1007 10:24:27.186265   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:27.386598   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:27.681847   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:27.884987   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:28.182191   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:28.385934   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:28.682183   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:28.885119   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:29.182784   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:29.386059   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:29.683579   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:29.885799   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:30.181544   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:30.386059   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:30.681816   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:30.885334   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:31.182616   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:31.385209   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:31.682597   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:31.885969   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:32.182120   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:32.385909   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:32.682124   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:32.885514   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:33.181888   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:33.385849   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:33.682471   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:33.885847   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:24:34.183481   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:34.385816   11818 kapi.go:107] duration metric: took 1m17.004285943s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1007 10:24:34.387530   11818 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-681605 cluster.
	I1007 10:24:34.388910   11818 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1007 10:24:34.390138   11818 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1007 10:24:34.682378   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:35.182982   11818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:24:35.682626   11818 kapi.go:107] duration metric: took 1m20.005480924s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1007 10:24:35.684541   11818 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, metrics-server, cloud-spanner, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1007 10:24:35.685962   11818 addons.go:510] duration metric: took 1m30.407473402s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns metrics-server cloud-spanner inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1007 10:24:35.686000   11818 start.go:246] waiting for cluster config update ...
	I1007 10:24:35.686015   11818 start.go:255] writing updated cluster config ...
	I1007 10:24:35.686305   11818 ssh_runner.go:195] Run: rm -f paused
	I1007 10:24:35.738086   11818 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 10:24:35.740209   11818 out.go:177] * Done! kubectl is now configured to use "addons-681605" cluster and "default" namespace by default
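Editor's note (not part of the captured log): the gcp-auth messages above state that credentials are mounted into every pod unless the pod carries a label with the `gcp-auth-skip-secret` key, and that already-running pods pick up credentials only after being recreated or after rerunning addons enable with --refresh. The sketch below is a minimal, illustrative pod manifest showing that opt-out label; the pod name, the image, and the label value "true" are assumptions for illustration only (the log names just the label key).

# pod-skip-gcp-auth.yaml (hypothetical file name, illustrative only)
apiVersion: v1
kind: Pod
metadata:
  name: busybox-no-gcp-creds        # hypothetical name, not from the test run
  labels:
    gcp-auth-skip-secret: "true"    # label key from the log above; value assumed
spec:
  containers:
  - name: busybox
    image: busybox                  # placeholder image for the sketch
    command: ["sleep", "3600"]

For pods created before the addon finished, the log's own suggestion applies: either recreate them, or rerun the addon enable step with the --refresh flag mentioned above (e.g. `minikube addons enable gcp-auth --refresh`, shown here only as a sketch of that suggestion).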
	
	
	==> CRI-O <==
	Oct 07 10:38:25 addons-681605 crio[664]: time="2024-10-07 10:38:25.985384761Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297505985357945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c35379c1-c515-407c-9481-3824e6b224aa name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:38:25 addons-681605 crio[664]: time="2024-10-07 10:38:25.986303152Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b1b16fa1-d6a0-47a4-ac31-debef82f8a75 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:38:25 addons-681605 crio[664]: time="2024-10-07 10:38:25.986389999Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b1b16fa1-d6a0-47a4-ac31-debef82f8a75 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:38:25 addons-681605 crio[664]: time="2024-10-07 10:38:25.987049769Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a0adf94a1de08a09ed9afbde95f15e599fc6df6b9c038e0e22742c6d47180623,PodSandboxId:640bdb15b75ed9cdbc333b463b067899a911e009744eb243289af7bd416487bc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728297339307253135,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-2h846,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 90c07936-8a56-435e-9ff4-58db904243cb,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f28c679a3bb2acdb906c94443c0e8d1f68ba40a8476222c4fb9e17688ecf5d,PodSandboxId:b85936ceb669731e4a89ddd723f8ae1030e0398fbbc75e21b17f9f42dc58f149,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728297314430053246,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 418202f3-6a6f-41d5-bdd6-50b1f855a708,},Annotations:map[string]string{i
o.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d23e2bfca558d2e1f2c13dcfe86870650ccc6bfd84b66b7306113b89fae1e63,PodSandboxId:bb1f25c6957565196c546377b7d2913764bff9c239cc9fda010971769f33cd95,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728297198964750501,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e7bbb5ed-1d5a-45fd-9651-7ca5a6f91cb3,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:545cb0c2f3c144096e055e134148814f40dfc0123de566180be32a700e41a8cd,PodSandboxId:2624acffdea1face93e91ea9d89270e6d6db428e03b6b03bc35e3ff03e19e81b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728296622870041252,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-z5fpj,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 3b2974fc-b174-48a3-b7ed-5e1ae0743bb4,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bfcc864fd24ecec1bd47cefbd4862a5c81eea3a774c391eb6f52e1500765100,PodSandboxId:bccd3390c55ab91192232d439d657de3dc09f2068394a2677a044e8a9803c959,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728296591548969829,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner
,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be6826cc-4ed3-43a6-9da7-09ba7c596ecf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:029ddf760b7bd2a215db7cbf9056dd1275da479a66e8ad2608684f502581c44d,PodSandboxId:a25a90d37dd076732b543ba31ceadba642ef939dff956343883b2ad760e5fcf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728296590066275293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c6
5d6cfc9-9wqp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aab4529c-a075-4383-b45b-c26fa0aafe31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aaca1813e5539b66196ea5774ab5c182a73ff7a8ed072735219a702948d55ac,PodSandboxId:9906a009b58dd634a81ef943f81bf05a60d43213678ef2e6b2cbb0ffcb8cb477,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d13
1805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728296587695085511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rgzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfdd32a0-cf41-4a5e-ac6f-ccadb50f64a7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:826c1551b95744975f0b33601394494e26c3b4f5d2091d500173f4728914c986,PodSandboxId:7e956d65a276ee9a87ec610dffeebc21b3de28c7763b72cc894c788870f24acd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954e
a63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728296575214010531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3280de8a0ade9371dcef72b1a227164,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0383be6b6b5b9b2193201563ddebf4a644741e039dc1c955bff56cb164fdeadf,PodSandboxId:de1b43a3c9242e1b11932aa3066d226e6be95919ff72533121ccc4ecf72a691b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,
State:CONTAINER_RUNNING,CreatedAt:1728296575179452435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e70c3afee1998dc9e1caef3b70aa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7cb7eae3cd94948ab7689353da458bae337ace3f78d3461c27161a1fca6580,PodSandboxId:efeecc70750deaf6963bb55892616a75b2e64286f00593ad76c077290fa2185e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,Cre
atedAt:1728296575169232182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6025046ac98cbf1dd7cc6b413b770d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e54c8f73ed4745b552df307e9a411923ef0f4456ecb7acf57f12ed7a95f0bf13,PodSandboxId:907a02cce8b500eaa92e8619d47398fb7f3edc4928c3686aef9e744657a96dd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728296575129740703,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc0818acf7b1d7e7c120d88eb58bcac,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b1b16fa1-d6a0-47a4-ac31-debef82f8a75 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:38:26 addons-681605 crio[664]: time="2024-10-07 10:38:26.031363390Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f87718fe-4fa1-4627-b9cf-5d1ff2b28bb8 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:38:26 addons-681605 crio[664]: time="2024-10-07 10:38:26.031438977Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f87718fe-4fa1-4627-b9cf-5d1ff2b28bb8 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:38:26 addons-681605 crio[664]: time="2024-10-07 10:38:26.033097812Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=120f1289-633a-4459-a118-3f3ba8b54c60 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:38:26 addons-681605 crio[664]: time="2024-10-07 10:38:26.035122276Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297506035091824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=120f1289-633a-4459-a118-3f3ba8b54c60 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:38:26 addons-681605 crio[664]: time="2024-10-07 10:38:26.035742157Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c7fa115-80e8-4711-84d8-1e692a15e657 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:38:26 addons-681605 crio[664]: time="2024-10-07 10:38:26.035823528Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c7fa115-80e8-4711-84d8-1e692a15e657 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:38:26 addons-681605 crio[664]: time="2024-10-07 10:38:26.036112541Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a0adf94a1de08a09ed9afbde95f15e599fc6df6b9c038e0e22742c6d47180623,PodSandboxId:640bdb15b75ed9cdbc333b463b067899a911e009744eb243289af7bd416487bc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728297339307253135,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-2h846,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 90c07936-8a56-435e-9ff4-58db904243cb,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f28c679a3bb2acdb906c94443c0e8d1f68ba40a8476222c4fb9e17688ecf5d,PodSandboxId:b85936ceb669731e4a89ddd723f8ae1030e0398fbbc75e21b17f9f42dc58f149,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728297314430053246,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 418202f3-6a6f-41d5-bdd6-50b1f855a708,},Annotations:map[string]string{i
o.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d23e2bfca558d2e1f2c13dcfe86870650ccc6bfd84b66b7306113b89fae1e63,PodSandboxId:bb1f25c6957565196c546377b7d2913764bff9c239cc9fda010971769f33cd95,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728297198964750501,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e7bbb5ed-1d5a-45fd-9651-7ca5a6f91cb3,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:545cb0c2f3c144096e055e134148814f40dfc0123de566180be32a700e41a8cd,PodSandboxId:2624acffdea1face93e91ea9d89270e6d6db428e03b6b03bc35e3ff03e19e81b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728296622870041252,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-z5fpj,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 3b2974fc-b174-48a3-b7ed-5e1ae0743bb4,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bfcc864fd24ecec1bd47cefbd4862a5c81eea3a774c391eb6f52e1500765100,PodSandboxId:bccd3390c55ab91192232d439d657de3dc09f2068394a2677a044e8a9803c959,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728296591548969829,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner
,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be6826cc-4ed3-43a6-9da7-09ba7c596ecf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:029ddf760b7bd2a215db7cbf9056dd1275da479a66e8ad2608684f502581c44d,PodSandboxId:a25a90d37dd076732b543ba31ceadba642ef939dff956343883b2ad760e5fcf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728296590066275293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c6
5d6cfc9-9wqp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aab4529c-a075-4383-b45b-c26fa0aafe31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aaca1813e5539b66196ea5774ab5c182a73ff7a8ed072735219a702948d55ac,PodSandboxId:9906a009b58dd634a81ef943f81bf05a60d43213678ef2e6b2cbb0ffcb8cb477,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d13
1805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728296587695085511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rgzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfdd32a0-cf41-4a5e-ac6f-ccadb50f64a7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:826c1551b95744975f0b33601394494e26c3b4f5d2091d500173f4728914c986,PodSandboxId:7e956d65a276ee9a87ec610dffeebc21b3de28c7763b72cc894c788870f24acd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954e
a63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728296575214010531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3280de8a0ade9371dcef72b1a227164,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0383be6b6b5b9b2193201563ddebf4a644741e039dc1c955bff56cb164fdeadf,PodSandboxId:de1b43a3c9242e1b11932aa3066d226e6be95919ff72533121ccc4ecf72a691b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,
State:CONTAINER_RUNNING,CreatedAt:1728296575179452435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e70c3afee1998dc9e1caef3b70aa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7cb7eae3cd94948ab7689353da458bae337ace3f78d3461c27161a1fca6580,PodSandboxId:efeecc70750deaf6963bb55892616a75b2e64286f00593ad76c077290fa2185e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,Cre
atedAt:1728296575169232182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6025046ac98cbf1dd7cc6b413b770d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e54c8f73ed4745b552df307e9a411923ef0f4456ecb7acf57f12ed7a95f0bf13,PodSandboxId:907a02cce8b500eaa92e8619d47398fb7f3edc4928c3686aef9e744657a96dd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728296575129740703,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc0818acf7b1d7e7c120d88eb58bcac,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c7fa115-80e8-4711-84d8-1e692a15e657 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:38:26 addons-681605 crio[664]: time="2024-10-07 10:38:26.079369501Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=adc2efd5-2a5f-4299-a214-44452a964fb5 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:38:26 addons-681605 crio[664]: time="2024-10-07 10:38:26.079470882Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=adc2efd5-2a5f-4299-a214-44452a964fb5 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:38:26 addons-681605 crio[664]: time="2024-10-07 10:38:26.081034091Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1386c8f6-4ca5-4b25-be07-492dfee428b7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:38:26 addons-681605 crio[664]: time="2024-10-07 10:38:26.082377602Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297506082350419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1386c8f6-4ca5-4b25-be07-492dfee428b7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:38:26 addons-681605 crio[664]: time="2024-10-07 10:38:26.083108593Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a697bb01-ec0e-438d-913b-4f75adecaed3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:38:26 addons-681605 crio[664]: time="2024-10-07 10:38:26.083186094Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a697bb01-ec0e-438d-913b-4f75adecaed3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:38:26 addons-681605 crio[664]: time="2024-10-07 10:38:26.083419103Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a0adf94a1de08a09ed9afbde95f15e599fc6df6b9c038e0e22742c6d47180623,PodSandboxId:640bdb15b75ed9cdbc333b463b067899a911e009744eb243289af7bd416487bc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728297339307253135,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-2h846,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 90c07936-8a56-435e-9ff4-58db904243cb,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f28c679a3bb2acdb906c94443c0e8d1f68ba40a8476222c4fb9e17688ecf5d,PodSandboxId:b85936ceb669731e4a89ddd723f8ae1030e0398fbbc75e21b17f9f42dc58f149,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728297314430053246,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 418202f3-6a6f-41d5-bdd6-50b1f855a708,},Annotations:map[string]string{i
o.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d23e2bfca558d2e1f2c13dcfe86870650ccc6bfd84b66b7306113b89fae1e63,PodSandboxId:bb1f25c6957565196c546377b7d2913764bff9c239cc9fda010971769f33cd95,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728297198964750501,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e7bbb5ed-1d5a-45fd-9651-7ca5a6f91cb3,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:545cb0c2f3c144096e055e134148814f40dfc0123de566180be32a700e41a8cd,PodSandboxId:2624acffdea1face93e91ea9d89270e6d6db428e03b6b03bc35e3ff03e19e81b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728296622870041252,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-z5fpj,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 3b2974fc-b174-48a3-b7ed-5e1ae0743bb4,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bfcc864fd24ecec1bd47cefbd4862a5c81eea3a774c391eb6f52e1500765100,PodSandboxId:bccd3390c55ab91192232d439d657de3dc09f2068394a2677a044e8a9803c959,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728296591548969829,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner
,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be6826cc-4ed3-43a6-9da7-09ba7c596ecf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:029ddf760b7bd2a215db7cbf9056dd1275da479a66e8ad2608684f502581c44d,PodSandboxId:a25a90d37dd076732b543ba31ceadba642ef939dff956343883b2ad760e5fcf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728296590066275293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c6
5d6cfc9-9wqp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aab4529c-a075-4383-b45b-c26fa0aafe31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aaca1813e5539b66196ea5774ab5c182a73ff7a8ed072735219a702948d55ac,PodSandboxId:9906a009b58dd634a81ef943f81bf05a60d43213678ef2e6b2cbb0ffcb8cb477,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d13
1805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728296587695085511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rgzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfdd32a0-cf41-4a5e-ac6f-ccadb50f64a7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:826c1551b95744975f0b33601394494e26c3b4f5d2091d500173f4728914c986,PodSandboxId:7e956d65a276ee9a87ec610dffeebc21b3de28c7763b72cc894c788870f24acd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954e
a63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728296575214010531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3280de8a0ade9371dcef72b1a227164,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0383be6b6b5b9b2193201563ddebf4a644741e039dc1c955bff56cb164fdeadf,PodSandboxId:de1b43a3c9242e1b11932aa3066d226e6be95919ff72533121ccc4ecf72a691b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,
State:CONTAINER_RUNNING,CreatedAt:1728296575179452435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e70c3afee1998dc9e1caef3b70aa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7cb7eae3cd94948ab7689353da458bae337ace3f78d3461c27161a1fca6580,PodSandboxId:efeecc70750deaf6963bb55892616a75b2e64286f00593ad76c077290fa2185e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,Cre
atedAt:1728296575169232182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6025046ac98cbf1dd7cc6b413b770d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e54c8f73ed4745b552df307e9a411923ef0f4456ecb7acf57f12ed7a95f0bf13,PodSandboxId:907a02cce8b500eaa92e8619d47398fb7f3edc4928c3686aef9e744657a96dd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728296575129740703,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc0818acf7b1d7e7c120d88eb58bcac,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a697bb01-ec0e-438d-913b-4f75adecaed3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:38:26 addons-681605 crio[664]: time="2024-10-07 10:38:26.124121104Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3318e27f-64f5-4a66-b154-46b723d9c420 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:38:26 addons-681605 crio[664]: time="2024-10-07 10:38:26.124215361Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3318e27f-64f5-4a66-b154-46b723d9c420 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:38:26 addons-681605 crio[664]: time="2024-10-07 10:38:26.125355183Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=769e9ea3-05f5-45bf-a9cd-5964928b527d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:38:26 addons-681605 crio[664]: time="2024-10-07 10:38:26.126616063Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297506126589978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=769e9ea3-05f5-45bf-a9cd-5964928b527d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:38:26 addons-681605 crio[664]: time="2024-10-07 10:38:26.127327661Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad673533-208c-43df-a38e-ff57cc854781 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:38:26 addons-681605 crio[664]: time="2024-10-07 10:38:26.127404322Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad673533-208c-43df-a38e-ff57cc854781 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:38:26 addons-681605 crio[664]: time="2024-10-07 10:38:26.127765670Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a0adf94a1de08a09ed9afbde95f15e599fc6df6b9c038e0e22742c6d47180623,PodSandboxId:640bdb15b75ed9cdbc333b463b067899a911e009744eb243289af7bd416487bc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728297339307253135,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-2h846,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 90c07936-8a56-435e-9ff4-58db904243cb,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f28c679a3bb2acdb906c94443c0e8d1f68ba40a8476222c4fb9e17688ecf5d,PodSandboxId:b85936ceb669731e4a89ddd723f8ae1030e0398fbbc75e21b17f9f42dc58f149,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728297314430053246,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 418202f3-6a6f-41d5-bdd6-50b1f855a708,},Annotations:map[string]string{i
o.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d23e2bfca558d2e1f2c13dcfe86870650ccc6bfd84b66b7306113b89fae1e63,PodSandboxId:bb1f25c6957565196c546377b7d2913764bff9c239cc9fda010971769f33cd95,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728297198964750501,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e7bbb5ed-1d5a-45fd-9651-7ca5a6f91cb3,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:545cb0c2f3c144096e055e134148814f40dfc0123de566180be32a700e41a8cd,PodSandboxId:2624acffdea1face93e91ea9d89270e6d6db428e03b6b03bc35e3ff03e19e81b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728296622870041252,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-z5fpj,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 3b2974fc-b174-48a3-b7ed-5e1ae0743bb4,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bfcc864fd24ecec1bd47cefbd4862a5c81eea3a774c391eb6f52e1500765100,PodSandboxId:bccd3390c55ab91192232d439d657de3dc09f2068394a2677a044e8a9803c959,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728296591548969829,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner
,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be6826cc-4ed3-43a6-9da7-09ba7c596ecf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:029ddf760b7bd2a215db7cbf9056dd1275da479a66e8ad2608684f502581c44d,PodSandboxId:a25a90d37dd076732b543ba31ceadba642ef939dff956343883b2ad760e5fcf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728296590066275293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c6
5d6cfc9-9wqp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aab4529c-a075-4383-b45b-c26fa0aafe31,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aaca1813e5539b66196ea5774ab5c182a73ff7a8ed072735219a702948d55ac,PodSandboxId:9906a009b58dd634a81ef943f81bf05a60d43213678ef2e6b2cbb0ffcb8cb477,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d13
1805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728296587695085511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rgzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfdd32a0-cf41-4a5e-ac6f-ccadb50f64a7,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:826c1551b95744975f0b33601394494e26c3b4f5d2091d500173f4728914c986,PodSandboxId:7e956d65a276ee9a87ec610dffeebc21b3de28c7763b72cc894c788870f24acd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954e
a63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728296575214010531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3280de8a0ade9371dcef72b1a227164,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0383be6b6b5b9b2193201563ddebf4a644741e039dc1c955bff56cb164fdeadf,PodSandboxId:de1b43a3c9242e1b11932aa3066d226e6be95919ff72533121ccc4ecf72a691b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,
State:CONTAINER_RUNNING,CreatedAt:1728296575179452435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e70c3afee1998dc9e1caef3b70aa0d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f7cb7eae3cd94948ab7689353da458bae337ace3f78d3461c27161a1fca6580,PodSandboxId:efeecc70750deaf6963bb55892616a75b2e64286f00593ad76c077290fa2185e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,Cre
atedAt:1728296575169232182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb6025046ac98cbf1dd7cc6b413b770d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e54c8f73ed4745b552df307e9a411923ef0f4456ecb7acf57f12ed7a95f0bf13,PodSandboxId:907a02cce8b500eaa92e8619d47398fb7f3edc4928c3686aef9e744657a96dd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728296575129740703,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-681605,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc0818acf7b1d7e7c120d88eb58bcac,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ad673533-208c-43df-a38e-ff57cc854781 name=/runtime.v1.RuntimeService/ListContainers
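The Version / ImageFsInfo / ListContainers triplets above are the kubelet's periodic CRI polling, which CRI-O echoes at debug level; that is why the same container list is dumped several times within one second under different request ids. Below is a minimal, illustrative sketch (not part of the minikube test suite) of issuing the same ListContainers call directly with the upstream CRI client stubs; the socket path is an assumption taken from the cri-socket annotation in the node description further down, and root access to that socket is assumed.

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumption: CRI-O listens on the socket advertised in the node
		// annotation kubeadm.alpha.kubernetes.io/cri-socket.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter takes the "No filters were applied, returning full
		// container list" path seen in the CRI-O debug log above.
		resp, err := runtimeapi.NewRuntimeServiceClient(conn).
			ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Println(c.Metadata.Name, c.State)
		}
	}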
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a0adf94a1de08       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   640bdb15b75ed       hello-world-app-55bf9c44b4-2h846
	04f28c679a3bb       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     3 minutes ago       Running             busybox                   0                   b85936ceb6697       busybox
	7d23e2bfca558       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         5 minutes ago       Running             nginx                     0                   bb1f25c695756       nginx
	545cb0c2f3c14       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   14 minutes ago      Running             metrics-server            0                   2624acffdea1f       metrics-server-84c5f94fbc-z5fpj
	1bfcc864fd24e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        15 minutes ago      Running             storage-provisioner       0                   bccd3390c55ab       storage-provisioner
	029ddf760b7bd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        15 minutes ago      Running             coredns                   0                   a25a90d37dd07       coredns-7c65d6cfc9-9wqp6
	3aaca1813e553       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        15 minutes ago      Running             kube-proxy                0                   9906a009b58dd       kube-proxy-4rgzz
	826c1551b9574       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        15 minutes ago      Running             kube-scheduler            0                   7e956d65a276e       kube-scheduler-addons-681605
	0383be6b6b5b9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        15 minutes ago      Running             kube-apiserver            0                   de1b43a3c9242       kube-apiserver-addons-681605
	3f7cb7eae3cd9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        15 minutes ago      Running             etcd                      0                   efeecc70750de       etcd-addons-681605
	e54c8f73ed474       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        15 minutes ago      Running             kube-controller-manager   0                   907a02cce8b50       kube-controller-manager-addons-681605
	
	
	==> coredns [029ddf760b7bd2a215db7cbf9056dd1275da479a66e8ad2608684f502581c44d] <==
	[INFO] 10.244.0.20:49099 - 19725 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000114177s
	[INFO] 10.244.0.20:49099 - 53644 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000098765s
	[INFO] 10.244.0.20:49099 - 5410 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000073961s
	[INFO] 10.244.0.20:49099 - 38484 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00018831s
	[INFO] 10.244.0.20:40260 - 51722 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000115952s
	[INFO] 10.244.0.20:40260 - 64152 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000069808s
	[INFO] 10.244.0.20:40260 - 46651 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000056886s
	[INFO] 10.244.0.20:40260 - 611 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000054799s
	[INFO] 10.244.0.20:40260 - 64941 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048337s
	[INFO] 10.244.0.20:40260 - 60796 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004776s
	[INFO] 10.244.0.20:40260 - 5770 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000064533s
	[INFO] 10.244.0.20:58523 - 65409 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000106491s
	[INFO] 10.244.0.20:58523 - 50667 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000062148s
	[INFO] 10.244.0.20:58523 - 16534 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000033744s
	[INFO] 10.244.0.20:58523 - 46394 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000031666s
	[INFO] 10.244.0.20:46008 - 17093 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000033507s
	[INFO] 10.244.0.20:58523 - 62223 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041603s
	[INFO] 10.244.0.20:58523 - 40374 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000032092s
	[INFO] 10.244.0.20:58523 - 51446 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000044258s
	[INFO] 10.244.0.20:46008 - 19604 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000104929s
	[INFO] 10.244.0.20:46008 - 9029 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000079554s
	[INFO] 10.244.0.20:46008 - 52007 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000074321s
	[INFO] 10.244.0.20:46008 - 29031 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004094s
	[INFO] 10.244.0.20:46008 - 51808 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036861s
	[INFO] 10.244.0.20:46008 - 1347 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00003962s
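The NXDOMAIN/NOERROR pattern above is the standard cluster DNS search-path expansion rather than a resolution failure: the client at 10.244.0.20 looks up hello-world-app.default.svc.cluster.local, and because the name has fewer dots than the pod resolver's ndots:5 threshold, each search suffix is tried (and answered NXDOMAIN) before the literal name returns NOERROR. A minimal sketch of that expansion follows; the search list is an assumption inferred from the suffixes visible in the log (it matches a pod in the ingress-nginx namespace).

	package main

	import (
		"fmt"
		"strings"
	)

	// candidates mimics how a resolver configured with "ndots:5" expands a name
	// that has fewer than five dots: each search-path suffix is appended first,
	// and the literal name is tried last. Illustrative only.
	func candidates(name string, search []string, ndots int) []string {
		var out []string
		if strings.Count(name, ".") < ndots {
			for _, suffix := range search {
				out = append(out, name+"."+suffix)
			}
		}
		return append(out, name)
	}

	func main() {
		// Assumed search path of the querying pod, inferred from the log above.
		search := []string{"ingress-nginx.svc.cluster.local", "svc.cluster.local", "cluster.local"}
		for _, q := range candidates("hello-world-app.default.svc.cluster.local", search, 5) {
			// The first three names are the NXDOMAIN lookups; the last answers NOERROR.
			fmt.Println(q)
		}
	}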
	
	
	==> describe nodes <==
	Name:               addons-681605
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-681605
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f
	                    minikube.k8s.io/name=addons-681605
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T10_23_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-681605
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 10:22:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-681605
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 10:38:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 10:36:07 +0000   Mon, 07 Oct 2024 10:22:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 10:36:07 +0000   Mon, 07 Oct 2024 10:22:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 10:36:07 +0000   Mon, 07 Oct 2024 10:22:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 10:36:07 +0000   Mon, 07 Oct 2024 10:23:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.71
	  Hostname:    addons-681605
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc4a97a9ccc44fb1820bdad40fc00e6e
	  System UUID:                fc4a97a9-ccc4-4fb1-820b-dad40fc00e6e
	  Boot ID:                    c2e14225-5056-4cd9-9cd9-6d2a7db5e673
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-2h846         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 coredns-7c65d6cfc9-9wqp6                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     15m
	  kube-system                 etcd-addons-681605                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         15m
	  kube-system                 kube-apiserver-addons-681605             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-681605    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-4rgzz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-681605             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-84c5f94fbc-z5fpj          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         15m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node addons-681605 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node addons-681605 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node addons-681605 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x2 over 15m)  kubelet          Node addons-681605 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x2 over 15m)  kubelet          Node addons-681605 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x2 over 15m)  kubelet          Node addons-681605 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m                kubelet          Node addons-681605 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node addons-681605 event: Registered Node addons-681605 in Controller
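Everything in the node description above (conditions, capacity/allocatable, annotations such as the cri-socket path) is read from the Node API object that kubectl describe renders. The following is a minimal client-go sketch, assuming a kubeconfig for this cluster at the default ~/.kube/config location, that pulls the same condition and allocatable fields.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: a kubeconfig for the addons-681605 cluster at ~/.kube/config.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-681605", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Same data as the Conditions table in the describe output above.
		for _, cond := range node.Status.Conditions {
			fmt.Printf("%s=%s (%s)\n", cond.Type, cond.Status, cond.Reason)
		}
		fmt.Println("allocatable cpu:", node.Status.Allocatable.Cpu().String())
		fmt.Println("allocatable memory:", node.Status.Allocatable.Memory().String())
	}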
	
	
	==> dmesg <==
	[  +4.757771] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +1.622593] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.005703] kauditd_printk_skb: 134 callbacks suppressed
	[  +5.522451] kauditd_printk_skb: 109 callbacks suppressed
	[  +5.497092] kauditd_printk_skb: 41 callbacks suppressed
	[ +23.469706] kauditd_printk_skb: 6 callbacks suppressed
	[Oct 7 10:24] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.589798] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.022451] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.730863] kauditd_printk_skb: 10 callbacks suppressed
	[  +8.870517] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.001595] kauditd_printk_skb: 16 callbacks suppressed
	[Oct 7 10:32] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.616888] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.034904] kauditd_printk_skb: 9 callbacks suppressed
	[Oct 7 10:33] kauditd_printk_skb: 25 callbacks suppressed
	[  +6.634935] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.804073] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.063108] kauditd_printk_skb: 22 callbacks suppressed
	[  +9.807832] kauditd_printk_skb: 23 callbacks suppressed
	[  +7.911746] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.578594] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.618818] kauditd_printk_skb: 3 callbacks suppressed
	[Oct 7 10:35] kauditd_printk_skb: 49 callbacks suppressed
	[ +27.162943] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [3f7cb7eae3cd94948ab7689353da458bae337ace3f78d3461c27161a1fca6580] <==
	{"level":"info","ts":"2024-10-07T10:24:23.322156Z","caller":"traceutil/trace.go:171","msg":"trace[50191854] linearizableReadLoop","detail":"{readStateIndex:1142; appliedIndex:1141; }","duration":"449.261429ms","start":"2024-10-07T10:24:22.872871Z","end":"2024-10-07T10:24:23.322133Z","steps":["trace[50191854] 'read index received'  (duration: 449.12947ms)","trace[50191854] 'applied index is now lower than readState.Index'  (duration: 131.634µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T10:24:23.322247Z","caller":"traceutil/trace.go:171","msg":"trace[41529296] transaction","detail":"{read_only:false; response_revision:1105; number_of_response:1; }","duration":"456.168464ms","start":"2024-10-07T10:24:22.866072Z","end":"2024-10-07T10:24:23.322241Z","steps":["trace[41529296] 'process raft request'  (duration: 455.968224ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T10:24:23.322450Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.275342ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T10:24:23.322529Z","caller":"traceutil/trace.go:171","msg":"trace[1969370765] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1105; }","duration":"194.366423ms","start":"2024-10-07T10:24:23.128154Z","end":"2024-10-07T10:24:23.322520Z","steps":["trace[1969370765] 'agreement among raft nodes before linearized reading'  (duration: 194.215474ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T10:24:23.322666Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"449.80948ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T10:24:23.322683Z","caller":"traceutil/trace.go:171","msg":"trace[1093079039] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1105; }","duration":"449.83037ms","start":"2024-10-07T10:24:22.872848Z","end":"2024-10-07T10:24:23.322679Z","steps":["trace[1093079039] 'agreement among raft nodes before linearized reading'  (duration: 449.797367ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T10:24:23.322697Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T10:24:22.872815Z","time spent":"449.878291ms","remote":"127.0.0.1:41354","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-07T10:24:23.322840Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T10:24:22.866021Z","time spent":"456.248719ms","remote":"127.0.0.1:41420","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-681605\" mod_revision:1034 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-681605\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-681605\" > >"}
	{"level":"warn","ts":"2024-10-07T10:24:23.322951Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.521881ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T10:24:23.322968Z","caller":"traceutil/trace.go:171","msg":"trace[1979077163] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1105; }","duration":"154.541677ms","start":"2024-10-07T10:24:23.168422Z","end":"2024-10-07T10:24:23.322963Z","steps":["trace[1979077163] 'agreement among raft nodes before linearized reading'  (duration: 154.495316ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T10:24:23.323069Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.112164ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:497"}
	{"level":"info","ts":"2024-10-07T10:24:23.323084Z","caller":"traceutil/trace.go:171","msg":"trace[1647348850] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1105; }","duration":"180.127431ms","start":"2024-10-07T10:24:23.142952Z","end":"2024-10-07T10:24:23.323079Z","steps":["trace[1647348850] 'agreement among raft nodes before linearized reading'  (duration: 180.074585ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T10:24:34.940375Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.522226ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10522258548344374760 > lease_revoke:<id:1206926680e244f2>","response":"size:28"}
	{"level":"info","ts":"2024-10-07T10:25:05.928603Z","caller":"traceutil/trace.go:171","msg":"trace[1639356212] transaction","detail":"{read_only:false; response_revision:1253; number_of_response:1; }","duration":"274.769629ms","start":"2024-10-07T10:25:05.653809Z","end":"2024-10-07T10:25:05.928578Z","steps":["trace[1639356212] 'process raft request'  (duration: 274.281485ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T10:32:56.602471Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1507}
	{"level":"info","ts":"2024-10-07T10:32:56.636763Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1507,"took":"33.674116ms","hash":2970239771,"current-db-size-bytes":6414336,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":3653632,"current-db-size-in-use":"3.7 MB"}
	{"level":"info","ts":"2024-10-07T10:32:56.636847Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2970239771,"revision":1507,"compact-revision":-1}
	{"level":"info","ts":"2024-10-07T10:33:24.021986Z","caller":"traceutil/trace.go:171","msg":"trace[981934162] transaction","detail":"{read_only:false; response_revision:2233; number_of_response:1; }","duration":"168.098685ms","start":"2024-10-07T10:33:23.853841Z","end":"2024-10-07T10:33:24.021940Z","steps":["trace[981934162] 'process raft request'  (duration: 167.913699ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T10:33:40.532899Z","caller":"traceutil/trace.go:171","msg":"trace[479196927] transaction","detail":"{read_only:false; response_revision:2312; number_of_response:1; }","duration":"248.468937ms","start":"2024-10-07T10:33:40.284413Z","end":"2024-10-07T10:33:40.532882Z","steps":["trace[479196927] 'process raft request'  (duration: 248.25212ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T10:33:40.534572Z","caller":"traceutil/trace.go:171","msg":"trace[794250427] linearizableReadLoop","detail":"{readStateIndex:2476; appliedIndex:2476; }","duration":"177.759273ms","start":"2024-10-07T10:33:40.356795Z","end":"2024-10-07T10:33:40.534555Z","steps":["trace[794250427] 'read index received'  (duration: 177.751934ms)","trace[794250427] 'applied index is now lower than readState.Index'  (duration: 6.388µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-07T10:33:40.534987Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.120919ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets/\" range_end:\"/registry/statefulsets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-07T10:33:40.535041Z","caller":"traceutil/trace.go:171","msg":"trace[1440464097] range","detail":"{range_begin:/registry/statefulsets/; range_end:/registry/statefulsets0; response_count:0; response_revision:2312; }","duration":"178.260499ms","start":"2024-10-07T10:33:40.356773Z","end":"2024-10-07T10:33:40.535033Z","steps":["trace[1440464097] 'agreement among raft nodes before linearized reading'  (duration: 178.094448ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T10:37:56.610368Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2028}
	{"level":"info","ts":"2024-10-07T10:37:56.633437Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2028,"took":"22.134839ms","hash":3357345336,"current-db-size-bytes":6414336,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":4681728,"current-db-size-in-use":"4.7 MB"}
	{"level":"info","ts":"2024-10-07T10:37:56.633587Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3357345336,"revision":2028,"compact-revision":1507}
	
	
	==> kernel <==
	 10:38:26 up 16 min,  0 users,  load average: 0.37, 0.51, 0.41
	Linux addons-681605 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0383be6b6b5b9b2193201563ddebf4a644741e039dc1c955bff56cb164fdeadf] <==
	I1007 10:33:11.879247       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1007 10:33:12.087461       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.46.37"}
	E1007 10:33:40.177109       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1007 10:33:46.572267       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1007 10:33:48.274251       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 10:33:49.286418       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 10:33:50.295596       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 10:33:51.303059       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 10:33:52.314948       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 10:33:53.323332       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 10:33:54.330449       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1007 10:34:01.254208       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 10:34:01.254286       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 10:34:01.309033       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 10:34:01.309115       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 10:34:01.354686       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 10:34:01.354868       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 10:34:01.408879       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 10:34:01.411242       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 10:34:01.436955       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 10:34:01.437008       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1007 10:34:02.409398       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1007 10:34:02.437399       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1007 10:34:02.450750       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1007 10:35:36.270404       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.129.42"}
	
	
	==> kube-controller-manager [e54c8f73ed4745b552df307e9a411923ef0f4456ecb7acf57f12ed7a95f0bf13] <==
	I1007 10:36:07.518697       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-681605"
	W1007 10:36:08.147437       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:36:08.147538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:36:10.230645       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:36:10.230803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:36:35.753374       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:36:35.753520       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:36:48.048590       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:36:48.048760       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:36:50.297582       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:36:50.297632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:36:52.438870       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:36:52.439012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:37:21.784212       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:37:21.784287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:37:32.840448       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:37:32.840544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:37:36.218196       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:37:36.218252       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:37:48.059999       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:37:48.060064       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:38:15.594055       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:38:15.594113       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:38:15.838001       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:38:15.838069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [3aaca1813e5539b66196ea5774ab5c182a73ff7a8ed072735219a702948d55ac] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 10:23:10.882041       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 10:23:10.908065       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.71"]
	E1007 10:23:10.908138       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 10:23:11.040798       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 10:23:11.040830       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 10:23:11.040861       1 server_linux.go:169] "Using iptables Proxier"
	I1007 10:23:11.090272       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 10:23:11.090611       1 server.go:483] "Version info" version="v1.31.1"
	I1007 10:23:11.090623       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 10:23:11.124928       1 config.go:199] "Starting service config controller"
	I1007 10:23:11.124954       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 10:23:11.124988       1 config.go:105] "Starting endpoint slice config controller"
	I1007 10:23:11.124992       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 10:23:11.138925       1 config.go:328] "Starting node config controller"
	I1007 10:23:11.138957       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 10:23:11.225066       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 10:23:11.225130       1 shared_informer.go:320] Caches are synced for service config
	I1007 10:23:11.239294       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [826c1551b95744975f0b33601394494e26c3b4f5d2091d500173f4728914c986] <==
	W1007 10:22:57.993045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1007 10:22:57.993100       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 10:22:57.993230       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 10:22:57.993264       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 10:22:57.993293       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 10:22:57.993327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 10:22:58.840577       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1007 10:22:58.840628       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1007 10:22:58.896225       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 10:22:58.896360       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1007 10:22:58.904624       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1007 10:22:58.905589       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 10:22:59.063895       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 10:22:59.064057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 10:22:59.091379       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1007 10:22:59.091525       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 10:22:59.103952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 10:22:59.104075       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 10:22:59.135007       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1007 10:22:59.136210       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 10:22:59.152078       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 10:22:59.153303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 10:22:59.217608       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 10:22:59.217769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1007 10:23:00.885675       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 07 10:37:00 addons-681605 kubelet[1197]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 10:37:01 addons-681605 kubelet[1197]: E1007 10:37:01.380604    1197 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297421380208568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:37:01 addons-681605 kubelet[1197]: E1007 10:37:01.380665    1197 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297421380208568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:37:11 addons-681605 kubelet[1197]: E1007 10:37:11.383125    1197 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297431382879129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:37:11 addons-681605 kubelet[1197]: E1007 10:37:11.383167    1197 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297431382879129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:37:21 addons-681605 kubelet[1197]: E1007 10:37:21.385704    1197 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297441385326295,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:37:21 addons-681605 kubelet[1197]: E1007 10:37:21.385982    1197 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297441385326295,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:37:31 addons-681605 kubelet[1197]: E1007 10:37:31.392755    1197 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297451388609984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:37:31 addons-681605 kubelet[1197]: E1007 10:37:31.392843    1197 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297451388609984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:37:41 addons-681605 kubelet[1197]: E1007 10:37:41.396037    1197 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297461395596866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:37:41 addons-681605 kubelet[1197]: E1007 10:37:41.396345    1197 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297461395596866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:37:51 addons-681605 kubelet[1197]: E1007 10:37:51.398583    1197 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297471398164655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:37:51 addons-681605 kubelet[1197]: E1007 10:37:51.398630    1197 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297471398164655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:38:00 addons-681605 kubelet[1197]: E1007 10:38:00.905883    1197 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 10:38:00 addons-681605 kubelet[1197]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 10:38:00 addons-681605 kubelet[1197]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 10:38:00 addons-681605 kubelet[1197]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 10:38:00 addons-681605 kubelet[1197]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 10:38:01 addons-681605 kubelet[1197]: E1007 10:38:01.401899    1197 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297481401186135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:38:01 addons-681605 kubelet[1197]: E1007 10:38:01.401926    1197 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297481401186135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:38:02 addons-681605 kubelet[1197]: I1007 10:38:02.844429    1197 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 10:38:11 addons-681605 kubelet[1197]: E1007 10:38:11.404936    1197 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297491404465373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:38:11 addons-681605 kubelet[1197]: E1007 10:38:11.405023    1197 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297491404465373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:38:21 addons-681605 kubelet[1197]: E1007 10:38:21.408559    1197 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297501407919313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:38:21 addons-681605 kubelet[1197]: E1007 10:38:21.409185    1197 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728297501407919313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582807,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [1bfcc864fd24ecec1bd47cefbd4862a5c81eea3a774c391eb6f52e1500765100] <==
	I1007 10:23:12.084017       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 10:23:12.118001       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 10:23:12.118081       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 10:23:12.139600       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 10:23:12.139776       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-681605_bbb4f3d7-b106-4fd3-89c1-3d0edb6e4805!
	I1007 10:23:12.140810       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1aa4b118-f719-4134-b3ce-bdfd82029301", APIVersion:"v1", ResourceVersion:"595", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-681605_bbb4f3d7-b106-4fd3-89c1-3d0edb6e4805 became leader
	I1007 10:23:12.239902       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-681605_bbb4f3d7-b106-4fd3-89c1-3d0edb6e4805!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-681605 -n addons-681605
helpers_test.go:261: (dbg) Run:  kubectl --context addons-681605 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-681605 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (340.43s)
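Editor's note: the post-mortem above ends with a kubectl query that lists any pod not in the Running phase. A minimal sketch of that check, assuming a plain os/exec wrapper rather than minikube's actual helpers_test.go code:

// Hypothetical sketch, not the real helpers_test.go implementation: collect the
// names of pods that are not Running, using the same kubectl field selector
// shown in the post-mortem above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "addons-681605",
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl failed: %v\n%s", err, out)
	}
	fmt.Printf("non-Running pods: %s\n", out)
}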

                                                
                                    
TestAddons/StoppedEnableDisable (154.41s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-681605
addons_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-681605: exit status 82 (2m0.461704052s)

                                                
                                                
-- stdout --
	* Stopping node "addons-681605"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:172: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-681605" : exit status 82
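Editor's note: the exit status 82 asserted here is paired with GUEST_STOP_TIMEOUT in the stderr above. A minimal sketch, assuming a direct os/exec call rather than minikube's test harness, of how a caller surfaces that exit code:

// Hypothetical sketch: run "minikube stop" and report a non-zero exit code such
// as the status 82 (GUEST_STOP_TIMEOUT in this run) seen above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "stop", "-p", "addons-681605").CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("stop exited with code %d:\n%s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Printf("stop could not be started: %v\n", err)
		return
	}
	fmt.Println("stop succeeded")
}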
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-681605
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-681605: exit status 11 (21.664368548s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.71:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-681605" : exit status 11
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-681605
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-681605: exit status 11 (6.144373487s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.71:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-681605" : exit status 11
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-681605
addons_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-681605: exit status 11 (6.14278424s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.71:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:185: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-681605" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.41s)
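Editor's note: every addon enable/disable above fails while dialing the node's SSH port ("dial tcp 192.168.39.71:22: connect: no route to host"), consistent with the preceding stop leaving the VM unreachable. A minimal, hypothetical reachability probe for that endpoint, using only the Go standard library:

// Hypothetical sketch: probe the node SSH endpoint that the addon commands
// above failed to reach.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.39.71:22" // node address taken from the errors above
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Printf("ssh port unreachable: %v\n", err)
		return
	}
	defer conn.Close()
	fmt.Println("ssh port reachable")
}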

                                                
                                    
TestCertExpiration (1086.08s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-658191 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-658191 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (52.50751234s)
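Editor's note: the first start above issues cluster certificates with a 3-minute lifetime (--cert-expiration=3m); the stderr further down shows those certificates expiring and being regenerated on the second start. A minimal sketch for inspecting a certificate's NotAfter with the standard library; the file name is an assumption (kubeadm's certificateDir is /var/lib/minikube/certs per the output below), so copy the certificate out of the VM first, for example via minikube ssh:

// Hypothetical sketch: print a certificate's NotAfter to confirm the short
// lifetime requested with --cert-expiration=3m. "apiserver.crt" is assumed to
// be a local copy of the certificate fetched from the VM.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM certificate found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("NotAfter: %s (already expired: %v)\n", cert.NotAfter, time.Now().After(cert.NotAfter))
}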
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-658191 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p cert-expiration-658191 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: exit status 109 (14m10.857444029s)

                                                
                                                
-- stdout --
	* [cert-expiration-658191] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "cert-expiration-658191" primary control-plane node in "cert-expiration-658191" cluster
	* Updating the running kvm2 "cert-expiration-658191" VM ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Certificate client.crt has expired. Generating a new one...
	! Certificate apiserver.crt.cc9b4a92 has expired. Generating a new one...
	! Certificate proxy-client.crt has expired. Generating a new one...
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.116901ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000243898s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
	W1007 11:54:45.949654    9874 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	W1007 11:54:45.950476    9874 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.006247417s
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000430342s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
	W1007 11:58:49.180041   10657 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	W1007 11:58:49.180845   10657 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.006247417s
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000430342s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
	W1007 11:58:49.180041   10657 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	W1007 11:58:49.180845   10657 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
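The failure above ends with minikube's own suggestion (kubelet cgroup driver) and a kubeadm preflight warning that the kubelet service is not enabled. A minimal troubleshooting sketch based only on those hints, reusing the profile name and flags from the failed command recorded just below; the cgroup-driver theory is minikube's generic suggestion and is not confirmed by these logs:

	# Inspect the kubelet on the node, per the kubeadm advice quoted above
	minikube ssh -p cert-expiration-658191 -- sudo systemctl status kubelet
	minikube ssh -p cert-expiration-658191 -- sudo journalctl -xeu kubelet

	# Clear the Service-Kubelet preflight warning
	minikube ssh -p cert-expiration-658191 -- sudo systemctl enable kubelet.service

	# Retry the start with the cgroup-driver override suggested by minikube
	out/minikube-linux-amd64 start -p cert-expiration-658191 --memory=2048 --cert-expiration=8760h \
	  --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd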
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-linux-amd64 start -p cert-expiration-658191 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio" : exit status 109
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-10-07 12:02:52.643921654 +0000 UTC m=+6074.826707697
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p cert-expiration-658191 -n cert-expiration-658191
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p cert-expiration-658191 -n cert-expiration-658191: exit status 2 (229.670044ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestCertExpiration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestCertExpiration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p cert-expiration-658191 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p cert-expiration-658191 logs -n 25: (1.005310855s)
helpers_test.go:252: TestCertExpiration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-167819 sudo cat                              | bridge-167819          | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC | 07 Oct 24 11:53 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service             |                        |         |         |                     |                     |
	| ssh     | -p bridge-167819 sudo                                  | bridge-167819          | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC | 07 Oct 24 11:53 UTC |
	|         | cri-dockerd --version                                  |                        |         |         |                     |                     |
	| ssh     | -p bridge-167819 sudo                                  | bridge-167819          | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC |                     |
	|         | systemctl status containerd                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p bridge-167819 sudo                                  | bridge-167819          | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC | 07 Oct 24 11:53 UTC |
	|         | systemctl cat containerd                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p bridge-167819 sudo cat                              | bridge-167819          | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC | 07 Oct 24 11:53 UTC |
	|         | /lib/systemd/system/containerd.service                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-167819 sudo cat                              | bridge-167819          | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC | 07 Oct 24 11:53 UTC |
	|         | /etc/containerd/config.toml                            |                        |         |         |                     |                     |
	| ssh     | -p bridge-167819 sudo                                  | bridge-167819          | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC | 07 Oct 24 11:53 UTC |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-167819 sudo                                  | bridge-167819          | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC | 07 Oct 24 11:53 UTC |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p bridge-167819 sudo                                  | bridge-167819          | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC | 07 Oct 24 11:53 UTC |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-167819 sudo find                             | bridge-167819          | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC | 07 Oct 24 11:53 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p bridge-167819 sudo crio                             | bridge-167819          | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC | 07 Oct 24 11:53 UTC |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p bridge-167819                                       | bridge-167819          | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC | 07 Oct 24 11:53 UTC |
	| start   | -p embed-certs-475689                                  | embed-certs-475689     | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC | 07 Oct 24 11:54 UTC |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-475689            | embed-certs-475689     | jenkins | v1.34.0 | 07 Oct 24 11:54 UTC | 07 Oct 24 11:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p embed-certs-475689                                  | embed-certs-475689     | jenkins | v1.34.0 | 07 Oct 24 11:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-236151             | no-preload-236151      | jenkins | v1.34.0 | 07 Oct 24 11:54 UTC | 07 Oct 24 11:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-236151                                   | no-preload-236151      | jenkins | v1.34.0 | 07 Oct 24 11:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-610504        | old-k8s-version-610504 | jenkins | v1.34.0 | 07 Oct 24 11:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-475689                 | embed-certs-475689     | jenkins | v1.34.0 | 07 Oct 24 11:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-475689                                  | embed-certs-475689     | jenkins | v1.34.0 | 07 Oct 24 11:56 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-236151                  | no-preload-236151      | jenkins | v1.34.0 | 07 Oct 24 11:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-236151                                   | no-preload-236151      | jenkins | v1.34.0 | 07 Oct 24 11:56 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-610504                              | old-k8s-version-610504 | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC | 07 Oct 24 11:57 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-610504             | old-k8s-version-610504 | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC | 07 Oct 24 11:57 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-610504                              | old-k8s-version-610504 | jenkins | v1.34.0 | 07 Oct 24 11:57 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=kvm2                                          |                        |         |         |                     |                     |
	|         | --container-runtime=crio                               |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 11:57:58
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 11:57:58.579973   72038 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:57:58.580139   72038 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:57:58.580149   72038 out.go:358] Setting ErrFile to fd 2...
	I1007 11:57:58.580154   72038 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:57:58.580344   72038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 11:57:58.580884   72038 out.go:352] Setting JSON to false
	I1007 11:57:58.581871   72038 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5973,"bootTime":1728296306,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 11:57:58.581966   72038 start.go:139] virtualization: kvm guest
	I1007 11:57:58.584275   72038 out.go:177] * [old-k8s-version-610504] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 11:57:58.585636   72038 notify.go:220] Checking for updates...
	I1007 11:57:58.585655   72038 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 11:57:58.587035   72038 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:57:58.588625   72038 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 11:57:58.589963   72038 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 11:57:58.591302   72038 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 11:57:58.593015   72038 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 11:57:58.595074   72038 config.go:182] Loaded profile config "old-k8s-version-610504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1007 11:57:58.595567   72038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:57:58.595653   72038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:57:58.610407   72038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41299
	I1007 11:57:58.610813   72038 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:57:58.611312   72038 main.go:141] libmachine: Using API Version  1
	I1007 11:57:58.611331   72038 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:57:58.611660   72038 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:57:58.611819   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .DriverName
	I1007 11:57:58.613699   72038 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1007 11:57:58.614923   72038 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 11:57:58.615233   72038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:57:58.615274   72038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:57:58.629904   72038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33541
	I1007 11:57:58.630396   72038 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:57:58.630842   72038 main.go:141] libmachine: Using API Version  1
	I1007 11:57:58.630859   72038 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:57:58.631154   72038 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:57:58.631327   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .DriverName
	I1007 11:57:58.667305   72038 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 11:57:58.668489   72038 start.go:297] selected driver: kvm2
	I1007 11:57:58.668503   72038 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-610504 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-610504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.75 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:57:58.668640   72038 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 11:57:58.669728   72038 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:57:58.669818   72038 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19761-3912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 11:57:58.685127   72038 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 11:57:58.685514   72038 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 11:57:58.685542   72038 cni.go:84] Creating CNI manager for ""
	I1007 11:57:58.685582   72038 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 11:57:58.685616   72038 start.go:340] cluster config:
	{Name:old-k8s-version-610504 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-610504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.75 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:57:58.685716   72038 iso.go:125] acquiring lock: {Name:mk894f62b1e70fe8556c552f818beb1e4be6fad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:57:58.687676   72038 out.go:177] * Starting "old-k8s-version-610504" primary control-plane node in "old-k8s-version-610504" cluster
	I1007 11:57:54.028136   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:57:57.100243   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:57:58.688849   72038 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1007 11:57:58.688881   72038 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1007 11:57:58.688890   72038 cache.go:56] Caching tarball of preloaded images
	I1007 11:57:58.688975   72038 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 11:57:58.688984   72038 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1007 11:57:58.689069   72038 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/old-k8s-version-610504/config.json ...
	I1007 11:57:58.689244   72038 start.go:360] acquireMachinesLock for old-k8s-version-610504: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 11:58:03.180239   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:58:06.252261   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:58:12.332249   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:58:15.404287   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:58:21.484257   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:58:24.556203   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:58:30.636260   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:58:33.708280   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:58:39.788259   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:58:42.860271   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:58:47.873677   58399 kubeadm.go:310] error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	I1007 11:58:47.873792   58399 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1007 11:58:47.875770   58399 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 11:58:47.875814   58399 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 11:58:47.875902   58399 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 11:58:47.876034   58399 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 11:58:47.876148   58399 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 11:58:47.876228   58399 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 11:58:47.877818   58399 out.go:235]   - Generating certificates and keys ...
	I1007 11:58:47.877905   58399 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 11:58:47.877981   58399 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 11:58:47.878077   58399 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 11:58:47.878154   58399 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 11:58:47.878260   58399 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 11:58:47.878314   58399 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 11:58:47.878390   58399 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 11:58:47.878449   58399 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 11:58:47.878542   58399 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 11:58:47.878631   58399 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 11:58:47.878678   58399 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 11:58:47.878752   58399 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 11:58:47.878818   58399 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 11:58:47.878886   58399 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 11:58:47.878934   58399 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 11:58:47.878983   58399 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 11:58:47.879026   58399 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 11:58:47.879091   58399 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 11:58:47.879144   58399 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 11:58:47.880622   58399 out.go:235]   - Booting up control plane ...
	I1007 11:58:47.880734   58399 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 11:58:47.880839   58399 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 11:58:47.880923   58399 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 11:58:47.881066   58399 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 11:58:47.881150   58399 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 11:58:47.881181   58399 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 11:58:47.881311   58399 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 11:58:47.881401   58399 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 11:58:47.881453   58399 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.116901ms
	I1007 11:58:47.881526   58399 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 11:58:47.881577   58399 kubeadm.go:310] [api-check] The API server is not healthy after 4m0.000243898s
	I1007 11:58:47.881580   58399 kubeadm.go:310] 
	I1007 11:58:47.881612   58399 kubeadm.go:310] Unfortunately, an error has occurred:
	I1007 11:58:47.881637   58399 kubeadm.go:310] 	context deadline exceeded
	I1007 11:58:47.881639   58399 kubeadm.go:310] 
	I1007 11:58:47.881666   58399 kubeadm.go:310] This error is likely caused by:
	I1007 11:58:47.881691   58399 kubeadm.go:310] 	- The kubelet is not running
	I1007 11:58:47.881774   58399 kubeadm.go:310] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1007 11:58:47.881778   58399 kubeadm.go:310] 
	I1007 11:58:47.881871   58399 kubeadm.go:310] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1007 11:58:47.881897   58399 kubeadm.go:310] 	- 'systemctl status kubelet'
	I1007 11:58:47.881926   58399 kubeadm.go:310] 	- 'journalctl -xeu kubelet'
	I1007 11:58:47.881929   58399 kubeadm.go:310] 
	I1007 11:58:47.882010   58399 kubeadm.go:310] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1007 11:58:47.882074   58399 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1007 11:58:47.882146   58399 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1007 11:58:47.882224   58399 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1007 11:58:47.882284   58399 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I1007 11:58:47.882424   58399 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	W1007 11:58:47.882458   58399 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.116901ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000243898s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
	W1007 11:54:45.949654    9874 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	W1007 11:54:45.950476    9874 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1007 11:58:47.882508   58399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 11:58:49.072043   58399 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.189515966s)
	I1007 11:58:49.072106   58399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 11:58:49.086892   58399 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 11:58:49.097151   58399 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 11:58:49.097160   58399 kubeadm.go:157] found existing configuration files:
	
	I1007 11:58:49.097194   58399 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 11:58:49.106709   58399 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 11:58:49.106760   58399 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 11:58:49.116405   58399 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 11:58:49.125586   58399 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 11:58:49.125638   58399 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 11:58:49.134707   58399 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 11:58:49.143166   58399 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 11:58:49.143208   58399 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 11:58:49.152366   58399 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 11:58:49.161284   58399 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 11:58:49.161331   58399 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 11:58:49.170305   58399 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 11:58:49.216268   58399 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 11:58:49.216423   58399 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 11:58:49.323670   58399 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 11:58:49.323805   58399 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 11:58:49.324046   58399 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 11:58:49.331269   58399 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 11:58:49.333308   58399 out.go:235]   - Generating certificates and keys ...
	I1007 11:58:49.333411   58399 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 11:58:49.333509   58399 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 11:58:49.333601   58399 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 11:58:49.333653   58399 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 11:58:49.333717   58399 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 11:58:49.333765   58399 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 11:58:49.333815   58399 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 11:58:49.333872   58399 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 11:58:49.333933   58399 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 11:58:49.334000   58399 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 11:58:49.334030   58399 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 11:58:49.334110   58399 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 11:58:49.630492   58399 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 11:58:50.052502   58399 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 11:58:50.178329   58399 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 11:58:50.261066   58399 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 11:58:50.326035   58399 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 11:58:50.326551   58399 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 11:58:50.329065   58399 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 11:58:50.330926   58399 out.go:235]   - Booting up control plane ...
	I1007 11:58:50.331049   58399 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 11:58:50.331272   58399 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 11:58:50.334402   58399 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 11:58:50.356685   58399 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 11:58:50.366766   58399 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 11:58:50.366845   58399 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 11:58:50.508592   58399 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 11:58:50.508740   58399 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 11:58:51.515710   58399 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.006247417s
	I1007 11:58:51.515783   58399 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 11:58:48.940242   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:58:52.012254   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:58:58.092243   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:59:01.164229   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:59:07.244246   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:59:10.316269   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:59:16.396216   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:59:19.468320   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:59:25.548218   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:59:28.620323   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:59:34.700240   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:59:37.772295   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:59:43.852272   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:59:46.924302   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:59:53.004268   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 11:59:56.076246   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 12:00:02.156239   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 12:00:05.228365   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 12:00:11.308257   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 12:00:14.380316   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 12:00:20.460267   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 12:00:23.532256   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 12:00:29.612276   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 12:00:32.684297   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 12:00:38.764224   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 12:00:41.836299   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 12:00:47.916285   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 12:00:50.988347   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 12:00:57.068277   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 12:01:00.140262   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 12:01:06.220251   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 12:01:09.292223   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 12:01:15.372262   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 12:01:18.444202   71483 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.37:22: connect: no route to host
	I1007 12:01:21.448576   71606 start.go:364] duration metric: took 4m26.443520269s to acquireMachinesLock for "no-preload-236151"
	I1007 12:01:21.448631   71606 start.go:96] Skipping create...Using existing machine configuration
	I1007 12:01:21.448641   71606 fix.go:54] fixHost starting: 
	I1007 12:01:21.449122   71606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:01:21.449173   71606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:01:21.465405   71606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36857
	I1007 12:01:21.465989   71606 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:01:21.466550   71606 main.go:141] libmachine: Using API Version  1
	I1007 12:01:21.466576   71606 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:01:21.466948   71606 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:01:21.467316   71606 main.go:141] libmachine: (no-preload-236151) Calling .DriverName
	I1007 12:01:21.467476   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetState
	I1007 12:01:21.469054   71606 fix.go:112] recreateIfNeeded on no-preload-236151: state=Stopped err=<nil>
	I1007 12:01:21.469081   71606 main.go:141] libmachine: (no-preload-236151) Calling .DriverName
	W1007 12:01:21.469265   71606 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 12:01:21.471371   71606 out.go:177] * Restarting existing kvm2 VM for "no-preload-236151" ...
	I1007 12:01:21.446099   71483 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:01:21.446145   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetMachineName
	I1007 12:01:21.446477   71483 buildroot.go:166] provisioning hostname "embed-certs-475689"
	I1007 12:01:21.446504   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetMachineName
	I1007 12:01:21.446685   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHHostname
	I1007 12:01:21.448422   71483 machine.go:96] duration metric: took 4m37.427470752s to provisionDockerMachine
	I1007 12:01:21.448465   71483 fix.go:56] duration metric: took 4m37.448283035s for fixHost
	I1007 12:01:21.448477   71483 start.go:83] releasing machines lock for "embed-certs-475689", held for 4m37.448314622s
	W1007 12:01:21.448500   71483 start.go:714] error starting host: provision: host is not running
	W1007 12:01:21.448588   71483 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1007 12:01:21.448599   71483 start.go:729] Will try again in 5 seconds ...
	I1007 12:01:21.472584   71606 main.go:141] libmachine: (no-preload-236151) Calling .Start
	I1007 12:01:21.472749   71606 main.go:141] libmachine: (no-preload-236151) Ensuring networks are active...
	I1007 12:01:21.473550   71606 main.go:141] libmachine: (no-preload-236151) Ensuring network default is active
	I1007 12:01:21.473899   71606 main.go:141] libmachine: (no-preload-236151) Ensuring network mk-no-preload-236151 is active
	I1007 12:01:21.474399   71606 main.go:141] libmachine: (no-preload-236151) Getting domain xml...
	I1007 12:01:21.475032   71606 main.go:141] libmachine: (no-preload-236151) Creating domain...
	I1007 12:01:22.709268   71606 main.go:141] libmachine: (no-preload-236151) Waiting to get IP...
	I1007 12:01:22.710121   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:22.710610   71606 main.go:141] libmachine: (no-preload-236151) DBG | unable to find current IP address of domain no-preload-236151 in network mk-no-preload-236151
	I1007 12:01:22.710673   71606 main.go:141] libmachine: (no-preload-236151) DBG | I1007 12:01:22.710591   72763 retry.go:31] will retry after 289.702552ms: waiting for machine to come up
	I1007 12:01:23.002111   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:23.002645   71606 main.go:141] libmachine: (no-preload-236151) DBG | unable to find current IP address of domain no-preload-236151 in network mk-no-preload-236151
	I1007 12:01:23.002673   71606 main.go:141] libmachine: (no-preload-236151) DBG | I1007 12:01:23.002600   72763 retry.go:31] will retry after 359.231426ms: waiting for machine to come up
	I1007 12:01:23.363124   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:23.363632   71606 main.go:141] libmachine: (no-preload-236151) DBG | unable to find current IP address of domain no-preload-236151 in network mk-no-preload-236151
	I1007 12:01:23.363665   71606 main.go:141] libmachine: (no-preload-236151) DBG | I1007 12:01:23.363554   72763 retry.go:31] will retry after 382.639379ms: waiting for machine to come up
	I1007 12:01:23.748039   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:23.748510   71606 main.go:141] libmachine: (no-preload-236151) DBG | unable to find current IP address of domain no-preload-236151 in network mk-no-preload-236151
	I1007 12:01:23.748539   71606 main.go:141] libmachine: (no-preload-236151) DBG | I1007 12:01:23.748458   72763 retry.go:31] will retry after 418.060427ms: waiting for machine to come up
	I1007 12:01:24.167949   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:24.168543   71606 main.go:141] libmachine: (no-preload-236151) DBG | unable to find current IP address of domain no-preload-236151 in network mk-no-preload-236151
	I1007 12:01:24.168586   71606 main.go:141] libmachine: (no-preload-236151) DBG | I1007 12:01:24.168503   72763 retry.go:31] will retry after 625.182574ms: waiting for machine to come up
	I1007 12:01:24.795228   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:24.795727   71606 main.go:141] libmachine: (no-preload-236151) DBG | unable to find current IP address of domain no-preload-236151 in network mk-no-preload-236151
	I1007 12:01:24.795756   71606 main.go:141] libmachine: (no-preload-236151) DBG | I1007 12:01:24.795676   72763 retry.go:31] will retry after 605.031673ms: waiting for machine to come up
	I1007 12:01:26.448784   71483 start.go:360] acquireMachinesLock for embed-certs-475689: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:01:25.402545   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:25.402933   71606 main.go:141] libmachine: (no-preload-236151) DBG | unable to find current IP address of domain no-preload-236151 in network mk-no-preload-236151
	I1007 12:01:25.402962   71606 main.go:141] libmachine: (no-preload-236151) DBG | I1007 12:01:25.402922   72763 retry.go:31] will retry after 724.129138ms: waiting for machine to come up
	I1007 12:01:26.128806   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:26.129286   71606 main.go:141] libmachine: (no-preload-236151) DBG | unable to find current IP address of domain no-preload-236151 in network mk-no-preload-236151
	I1007 12:01:26.129320   71606 main.go:141] libmachine: (no-preload-236151) DBG | I1007 12:01:26.129231   72763 retry.go:31] will retry after 1.407561052s: waiting for machine to come up
	I1007 12:01:27.538676   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:27.539188   71606 main.go:141] libmachine: (no-preload-236151) DBG | unable to find current IP address of domain no-preload-236151 in network mk-no-preload-236151
	I1007 12:01:27.539237   71606 main.go:141] libmachine: (no-preload-236151) DBG | I1007 12:01:27.539062   72763 retry.go:31] will retry after 1.48626575s: waiting for machine to come up
	I1007 12:01:29.026794   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:29.027313   71606 main.go:141] libmachine: (no-preload-236151) DBG | unable to find current IP address of domain no-preload-236151 in network mk-no-preload-236151
	I1007 12:01:29.027340   71606 main.go:141] libmachine: (no-preload-236151) DBG | I1007 12:01:29.027275   72763 retry.go:31] will retry after 2.190316898s: waiting for machine to come up
	I1007 12:01:31.219945   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:31.220422   71606 main.go:141] libmachine: (no-preload-236151) DBG | unable to find current IP address of domain no-preload-236151 in network mk-no-preload-236151
	I1007 12:01:31.220453   71606 main.go:141] libmachine: (no-preload-236151) DBG | I1007 12:01:31.220377   72763 retry.go:31] will retry after 2.82926058s: waiting for machine to come up
	I1007 12:01:34.050851   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:34.051323   71606 main.go:141] libmachine: (no-preload-236151) DBG | unable to find current IP address of domain no-preload-236151 in network mk-no-preload-236151
	I1007 12:01:34.051354   71606 main.go:141] libmachine: (no-preload-236151) DBG | I1007 12:01:34.051272   72763 retry.go:31] will retry after 2.308402034s: waiting for machine to come up
	I1007 12:01:36.360828   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:36.361330   71606 main.go:141] libmachine: (no-preload-236151) DBG | unable to find current IP address of domain no-preload-236151 in network mk-no-preload-236151
	I1007 12:01:36.361355   71606 main.go:141] libmachine: (no-preload-236151) DBG | I1007 12:01:36.361274   72763 retry.go:31] will retry after 3.841230739s: waiting for machine to come up
	I1007 12:01:41.425138   72038 start.go:364] duration metric: took 3m42.735852077s to acquireMachinesLock for "old-k8s-version-610504"
	I1007 12:01:41.425193   72038 start.go:96] Skipping create...Using existing machine configuration
	I1007 12:01:41.425216   72038 fix.go:54] fixHost starting: 
	I1007 12:01:41.425608   72038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:01:41.425664   72038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:01:41.442812   72038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36145
	I1007 12:01:41.443344   72038 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:01:41.443998   72038 main.go:141] libmachine: Using API Version  1
	I1007 12:01:41.444024   72038 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:01:41.444364   72038 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:01:41.444557   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .DriverName
	I1007 12:01:41.444668   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetState
	I1007 12:01:41.446255   72038 fix.go:112] recreateIfNeeded on old-k8s-version-610504: state=Stopped err=<nil>
	I1007 12:01:41.446294   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .DriverName
	W1007 12:01:41.446446   72038 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 12:01:41.448650   72038 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-610504" ...
	I1007 12:01:40.206590   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:40.207066   71606 main.go:141] libmachine: (no-preload-236151) Found IP for machine: 192.168.72.252
	I1007 12:01:40.207094   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has current primary IP address 192.168.72.252 and MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:40.207103   71606 main.go:141] libmachine: (no-preload-236151) Reserving static IP address...
	I1007 12:01:40.207453   71606 main.go:141] libmachine: (no-preload-236151) Reserved static IP address: 192.168.72.252
	I1007 12:01:40.207488   71606 main.go:141] libmachine: (no-preload-236151) DBG | found host DHCP lease matching {name: "no-preload-236151", mac: "52:54:00:c8:a3:5b", ip: "192.168.72.252"} in network mk-no-preload-236151: {Iface:virbr3 ExpiryTime:2024-10-07 13:01:32 +0000 UTC Type:0 Mac:52:54:00:c8:a3:5b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:no-preload-236151 Clientid:01:52:54:00:c8:a3:5b}
	I1007 12:01:40.207499   71606 main.go:141] libmachine: (no-preload-236151) Waiting for SSH to be available...
	I1007 12:01:40.207539   71606 main.go:141] libmachine: (no-preload-236151) DBG | skip adding static IP to network mk-no-preload-236151 - found existing host DHCP lease matching {name: "no-preload-236151", mac: "52:54:00:c8:a3:5b", ip: "192.168.72.252"}
	I1007 12:01:40.207560   71606 main.go:141] libmachine: (no-preload-236151) DBG | Getting to WaitForSSH function...
	I1007 12:01:40.209911   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:40.210211   71606 main.go:141] libmachine: (no-preload-236151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a3:5b", ip: ""} in network mk-no-preload-236151: {Iface:virbr3 ExpiryTime:2024-10-07 13:01:32 +0000 UTC Type:0 Mac:52:54:00:c8:a3:5b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:no-preload-236151 Clientid:01:52:54:00:c8:a3:5b}
	I1007 12:01:40.210239   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined IP address 192.168.72.252 and MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:40.210348   71606 main.go:141] libmachine: (no-preload-236151) DBG | Using SSH client type: external
	I1007 12:01:40.210374   71606 main.go:141] libmachine: (no-preload-236151) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/no-preload-236151/id_rsa (-rw-------)
	I1007 12:01:40.210433   71606 main.go:141] libmachine: (no-preload-236151) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.252 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/no-preload-236151/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:01:40.210459   71606 main.go:141] libmachine: (no-preload-236151) DBG | About to run SSH command:
	I1007 12:01:40.210475   71606 main.go:141] libmachine: (no-preload-236151) DBG | exit 0
	I1007 12:01:40.336388   71606 main.go:141] libmachine: (no-preload-236151) DBG | SSH cmd err, output: <nil>: 
	I1007 12:01:40.336768   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetConfigRaw
	I1007 12:01:40.337511   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetIP
	I1007 12:01:40.340430   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:40.340785   71606 main.go:141] libmachine: (no-preload-236151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a3:5b", ip: ""} in network mk-no-preload-236151: {Iface:virbr3 ExpiryTime:2024-10-07 13:01:32 +0000 UTC Type:0 Mac:52:54:00:c8:a3:5b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:no-preload-236151 Clientid:01:52:54:00:c8:a3:5b}
	I1007 12:01:40.340819   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined IP address 192.168.72.252 and MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:40.341081   71606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/no-preload-236151/config.json ...
	I1007 12:01:40.341276   71606 machine.go:93] provisionDockerMachine start ...
	I1007 12:01:40.341298   71606 main.go:141] libmachine: (no-preload-236151) Calling .DriverName
	I1007 12:01:40.341558   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHHostname
	I1007 12:01:40.343778   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:40.344083   71606 main.go:141] libmachine: (no-preload-236151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a3:5b", ip: ""} in network mk-no-preload-236151: {Iface:virbr3 ExpiryTime:2024-10-07 13:01:32 +0000 UTC Type:0 Mac:52:54:00:c8:a3:5b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:no-preload-236151 Clientid:01:52:54:00:c8:a3:5b}
	I1007 12:01:40.344123   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined IP address 192.168.72.252 and MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:40.344280   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHPort
	I1007 12:01:40.344452   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHKeyPath
	I1007 12:01:40.344592   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHKeyPath
	I1007 12:01:40.344776   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHUsername
	I1007 12:01:40.344940   71606 main.go:141] libmachine: Using SSH client type: native
	I1007 12:01:40.345170   71606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.252 22 <nil> <nil>}
	I1007 12:01:40.345182   71606 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 12:01:40.448307   71606 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1007 12:01:40.448334   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetMachineName
	I1007 12:01:40.448561   71606 buildroot.go:166] provisioning hostname "no-preload-236151"
	I1007 12:01:40.448586   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetMachineName
	I1007 12:01:40.448740   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHHostname
	I1007 12:01:40.451352   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:40.451640   71606 main.go:141] libmachine: (no-preload-236151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a3:5b", ip: ""} in network mk-no-preload-236151: {Iface:virbr3 ExpiryTime:2024-10-07 13:01:32 +0000 UTC Type:0 Mac:52:54:00:c8:a3:5b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:no-preload-236151 Clientid:01:52:54:00:c8:a3:5b}
	I1007 12:01:40.451667   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined IP address 192.168.72.252 and MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:40.451821   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHPort
	I1007 12:01:40.452016   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHKeyPath
	I1007 12:01:40.452208   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHKeyPath
	I1007 12:01:40.452349   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHUsername
	I1007 12:01:40.452610   71606 main.go:141] libmachine: Using SSH client type: native
	I1007 12:01:40.452765   71606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.252 22 <nil> <nil>}
	I1007 12:01:40.452776   71606 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-236151 && echo "no-preload-236151" | sudo tee /etc/hostname
	I1007 12:01:40.575429   71606 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-236151
	
	I1007 12:01:40.575456   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHHostname
	I1007 12:01:40.577914   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:40.578213   71606 main.go:141] libmachine: (no-preload-236151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a3:5b", ip: ""} in network mk-no-preload-236151: {Iface:virbr3 ExpiryTime:2024-10-07 13:01:32 +0000 UTC Type:0 Mac:52:54:00:c8:a3:5b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:no-preload-236151 Clientid:01:52:54:00:c8:a3:5b}
	I1007 12:01:40.578255   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined IP address 192.168.72.252 and MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:40.578410   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHPort
	I1007 12:01:40.578621   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHKeyPath
	I1007 12:01:40.578777   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHKeyPath
	I1007 12:01:40.578899   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHUsername
	I1007 12:01:40.579028   71606 main.go:141] libmachine: Using SSH client type: native
	I1007 12:01:40.579225   71606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.252 22 <nil> <nil>}
	I1007 12:01:40.579242   71606 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-236151' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-236151/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-236151' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:01:40.692981   71606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:01:40.693016   71606 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 12:01:40.693033   71606 buildroot.go:174] setting up certificates
	I1007 12:01:40.693043   71606 provision.go:84] configureAuth start
	I1007 12:01:40.693051   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetMachineName
	I1007 12:01:40.693347   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetIP
	I1007 12:01:40.695905   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:40.696196   71606 main.go:141] libmachine: (no-preload-236151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a3:5b", ip: ""} in network mk-no-preload-236151: {Iface:virbr3 ExpiryTime:2024-10-07 13:01:32 +0000 UTC Type:0 Mac:52:54:00:c8:a3:5b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:no-preload-236151 Clientid:01:52:54:00:c8:a3:5b}
	I1007 12:01:40.696214   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined IP address 192.168.72.252 and MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:40.696348   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHHostname
	I1007 12:01:40.698340   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:40.698602   71606 main.go:141] libmachine: (no-preload-236151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a3:5b", ip: ""} in network mk-no-preload-236151: {Iface:virbr3 ExpiryTime:2024-10-07 13:01:32 +0000 UTC Type:0 Mac:52:54:00:c8:a3:5b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:no-preload-236151 Clientid:01:52:54:00:c8:a3:5b}
	I1007 12:01:40.698639   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined IP address 192.168.72.252 and MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:40.698729   71606 provision.go:143] copyHostCerts
	I1007 12:01:40.698792   71606 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 12:01:40.698813   71606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 12:01:40.698891   71606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 12:01:40.699025   71606 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 12:01:40.699039   71606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 12:01:40.699080   71606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 12:01:40.699173   71606 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 12:01:40.699184   71606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 12:01:40.699218   71606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 12:01:40.699286   71606 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.no-preload-236151 san=[127.0.0.1 192.168.72.252 localhost minikube no-preload-236151]
	I1007 12:01:40.776060   71606 provision.go:177] copyRemoteCerts
	I1007 12:01:40.776114   71606 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:01:40.776151   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHHostname
	I1007 12:01:40.778499   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:40.778783   71606 main.go:141] libmachine: (no-preload-236151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a3:5b", ip: ""} in network mk-no-preload-236151: {Iface:virbr3 ExpiryTime:2024-10-07 13:01:32 +0000 UTC Type:0 Mac:52:54:00:c8:a3:5b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:no-preload-236151 Clientid:01:52:54:00:c8:a3:5b}
	I1007 12:01:40.778805   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined IP address 192.168.72.252 and MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:40.778961   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHPort
	I1007 12:01:40.779138   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHKeyPath
	I1007 12:01:40.779274   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHUsername
	I1007 12:01:40.779395   71606 sshutil.go:53] new ssh client: &{IP:192.168.72.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/no-preload-236151/id_rsa Username:docker}
	I1007 12:01:40.862969   71606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:01:40.891512   71606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1007 12:01:40.919278   71606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:01:40.947564   71606 provision.go:87] duration metric: took 254.501128ms to configureAuth
	I1007 12:01:40.947593   71606 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:01:40.947784   71606 config.go:182] Loaded profile config "no-preload-236151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:01:40.947872   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHHostname
	I1007 12:01:40.950631   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:40.950915   71606 main.go:141] libmachine: (no-preload-236151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a3:5b", ip: ""} in network mk-no-preload-236151: {Iface:virbr3 ExpiryTime:2024-10-07 13:01:32 +0000 UTC Type:0 Mac:52:54:00:c8:a3:5b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:no-preload-236151 Clientid:01:52:54:00:c8:a3:5b}
	I1007 12:01:40.950941   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined IP address 192.168.72.252 and MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:40.951135   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHPort
	I1007 12:01:40.951332   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHKeyPath
	I1007 12:01:40.951497   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHKeyPath
	I1007 12:01:40.951630   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHUsername
	I1007 12:01:40.951775   71606 main.go:141] libmachine: Using SSH client type: native
	I1007 12:01:40.951961   71606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.252 22 <nil> <nil>}
	I1007 12:01:40.951996   71606 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:01:41.178979   71606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:01:41.179003   71606 machine.go:96] duration metric: took 837.713266ms to provisionDockerMachine
	I1007 12:01:41.179013   71606 start.go:293] postStartSetup for "no-preload-236151" (driver="kvm2")
	I1007 12:01:41.179022   71606 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:01:41.179037   71606 main.go:141] libmachine: (no-preload-236151) Calling .DriverName
	I1007 12:01:41.179356   71606 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:01:41.179381   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHHostname
	I1007 12:01:41.182057   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:41.182372   71606 main.go:141] libmachine: (no-preload-236151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a3:5b", ip: ""} in network mk-no-preload-236151: {Iface:virbr3 ExpiryTime:2024-10-07 13:01:32 +0000 UTC Type:0 Mac:52:54:00:c8:a3:5b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:no-preload-236151 Clientid:01:52:54:00:c8:a3:5b}
	I1007 12:01:41.182417   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined IP address 192.168.72.252 and MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:41.182630   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHPort
	I1007 12:01:41.182820   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHKeyPath
	I1007 12:01:41.182992   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHUsername
	I1007 12:01:41.183121   71606 sshutil.go:53] new ssh client: &{IP:192.168.72.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/no-preload-236151/id_rsa Username:docker}
	I1007 12:01:41.270909   71606 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:01:41.275508   71606 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:01:41.275543   71606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 12:01:41.275645   71606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 12:01:41.275748   71606 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 12:01:41.275832   71606 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:01:41.285693   71606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 12:01:41.310879   71606 start.go:296] duration metric: took 131.852455ms for postStartSetup
	I1007 12:01:41.310926   71606 fix.go:56] duration metric: took 19.862284096s for fixHost
	I1007 12:01:41.310950   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHHostname
	I1007 12:01:41.313928   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:41.314236   71606 main.go:141] libmachine: (no-preload-236151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a3:5b", ip: ""} in network mk-no-preload-236151: {Iface:virbr3 ExpiryTime:2024-10-07 13:01:32 +0000 UTC Type:0 Mac:52:54:00:c8:a3:5b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:no-preload-236151 Clientid:01:52:54:00:c8:a3:5b}
	I1007 12:01:41.314279   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined IP address 192.168.72.252 and MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:41.314451   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHPort
	I1007 12:01:41.314634   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHKeyPath
	I1007 12:01:41.314756   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHKeyPath
	I1007 12:01:41.314878   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHUsername
	I1007 12:01:41.314994   71606 main.go:141] libmachine: Using SSH client type: native
	I1007 12:01:41.315179   71606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.252 22 <nil> <nil>}
	I1007 12:01:41.315193   71606 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:01:41.424963   71606 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728302501.382402771
	
	I1007 12:01:41.424982   71606 fix.go:216] guest clock: 1728302501.382402771
	I1007 12:01:41.424989   71606 fix.go:229] Guest: 2024-10-07 12:01:41.382402771 +0000 UTC Remote: 2024-10-07 12:01:41.310930685 +0000 UTC m=+286.443304570 (delta=71.472086ms)
	I1007 12:01:41.425033   71606 fix.go:200] guest clock delta is within tolerance: 71.472086ms
	I1007 12:01:41.425039   71606 start.go:83] releasing machines lock for "no-preload-236151", held for 19.976424657s
	I1007 12:01:41.425068   71606 main.go:141] libmachine: (no-preload-236151) Calling .DriverName
	I1007 12:01:41.425336   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetIP
	I1007 12:01:41.428056   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:41.428374   71606 main.go:141] libmachine: (no-preload-236151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a3:5b", ip: ""} in network mk-no-preload-236151: {Iface:virbr3 ExpiryTime:2024-10-07 13:01:32 +0000 UTC Type:0 Mac:52:54:00:c8:a3:5b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:no-preload-236151 Clientid:01:52:54:00:c8:a3:5b}
	I1007 12:01:41.428406   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined IP address 192.168.72.252 and MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:41.428570   71606 main.go:141] libmachine: (no-preload-236151) Calling .DriverName
	I1007 12:01:41.429064   71606 main.go:141] libmachine: (no-preload-236151) Calling .DriverName
	I1007 12:01:41.429227   71606 main.go:141] libmachine: (no-preload-236151) Calling .DriverName
	I1007 12:01:41.429319   71606 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:01:41.429359   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHHostname
	I1007 12:01:41.429416   71606 ssh_runner.go:195] Run: cat /version.json
	I1007 12:01:41.429441   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHHostname
	I1007 12:01:41.432076   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:41.432186   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:41.432468   71606 main.go:141] libmachine: (no-preload-236151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a3:5b", ip: ""} in network mk-no-preload-236151: {Iface:virbr3 ExpiryTime:2024-10-07 13:01:32 +0000 UTC Type:0 Mac:52:54:00:c8:a3:5b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:no-preload-236151 Clientid:01:52:54:00:c8:a3:5b}
	I1007 12:01:41.432502   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined IP address 192.168.72.252 and MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:41.432543   71606 main.go:141] libmachine: (no-preload-236151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a3:5b", ip: ""} in network mk-no-preload-236151: {Iface:virbr3 ExpiryTime:2024-10-07 13:01:32 +0000 UTC Type:0 Mac:52:54:00:c8:a3:5b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:no-preload-236151 Clientid:01:52:54:00:c8:a3:5b}
	I1007 12:01:41.432563   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined IP address 192.168.72.252 and MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:41.432673   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHPort
	I1007 12:01:41.432759   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHPort
	I1007 12:01:41.432842   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHKeyPath
	I1007 12:01:41.432927   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHKeyPath
	I1007 12:01:41.432990   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHUsername
	I1007 12:01:41.433045   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetSSHUsername
	I1007 12:01:41.433095   71606 sshutil.go:53] new ssh client: &{IP:192.168.72.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/no-preload-236151/id_rsa Username:docker}
	I1007 12:01:41.433124   71606 sshutil.go:53] new ssh client: &{IP:192.168.72.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/no-preload-236151/id_rsa Username:docker}
	I1007 12:01:41.536167   71606 ssh_runner.go:195] Run: systemctl --version
	I1007 12:01:41.544017   71606 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:01:41.698049   71606 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:01:41.706116   71606 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:01:41.706195   71606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:01:41.724359   71606 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:01:41.724387   71606 start.go:495] detecting cgroup driver to use...
	I1007 12:01:41.724460   71606 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:01:41.745675   71606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:01:41.763319   71606 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:01:41.763391   71606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:01:41.778662   71606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:01:41.793785   71606 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:01:41.920354   71606 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:01:42.081013   71606 docker.go:233] disabling docker service ...
	I1007 12:01:42.081088   71606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:01:42.096226   71606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:01:42.110367   71606 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:01:42.244287   71606 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:01:42.370749   71606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:01:42.385581   71606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:01:42.405590   71606 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:01:42.405675   71606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:01:42.417908   71606 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:01:42.417997   71606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:01:42.429622   71606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:01:42.441716   71606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:01:42.454302   71606 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:01:42.471132   71606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:01:42.488130   71606 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:01:42.508494   71606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:01:42.523131   71606 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:01:42.534242   71606 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:01:42.534294   71606 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:01:42.557314   71606 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:01:42.571901   71606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:01:42.723347   71606 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:01:42.835593   71606 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:01:42.835653   71606 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:01:42.840729   71606 start.go:563] Will wait 60s for crictl version
	I1007 12:01:42.840796   71606 ssh_runner.go:195] Run: which crictl
	I1007 12:01:42.844616   71606 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:01:42.885395   71606 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:01:42.885458   71606 ssh_runner.go:195] Run: crio --version
	I1007 12:01:42.915455   71606 ssh_runner.go:195] Run: crio --version
	I1007 12:01:42.949029   71606 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:01:41.450127   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .Start
	I1007 12:01:41.450330   72038 main.go:141] libmachine: (old-k8s-version-610504) Ensuring networks are active...
	I1007 12:01:41.451164   72038 main.go:141] libmachine: (old-k8s-version-610504) Ensuring network default is active
	I1007 12:01:41.451528   72038 main.go:141] libmachine: (old-k8s-version-610504) Ensuring network mk-old-k8s-version-610504 is active
	I1007 12:01:41.451932   72038 main.go:141] libmachine: (old-k8s-version-610504) Getting domain xml...
	I1007 12:01:41.452738   72038 main.go:141] libmachine: (old-k8s-version-610504) Creating domain...
	I1007 12:01:42.787139   72038 main.go:141] libmachine: (old-k8s-version-610504) Waiting to get IP...
	I1007 12:01:42.788020   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:01:42.788560   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | unable to find current IP address of domain old-k8s-version-610504 in network mk-old-k8s-version-610504
	I1007 12:01:42.788642   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | I1007 12:01:42.788558   72913 retry.go:31] will retry after 209.525967ms: waiting for machine to come up
	I1007 12:01:43.000119   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:01:43.000644   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | unable to find current IP address of domain old-k8s-version-610504 in network mk-old-k8s-version-610504
	I1007 12:01:43.000672   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | I1007 12:01:43.000588   72913 retry.go:31] will retry after 353.772985ms: waiting for machine to come up
	I1007 12:01:43.356382   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:01:43.356993   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | unable to find current IP address of domain old-k8s-version-610504 in network mk-old-k8s-version-610504
	I1007 12:01:43.357019   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | I1007 12:01:43.356935   72913 retry.go:31] will retry after 405.937371ms: waiting for machine to come up
	I1007 12:01:42.950566   71606 main.go:141] libmachine: (no-preload-236151) Calling .GetIP
	I1007 12:01:42.953937   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:42.954395   71606 main.go:141] libmachine: (no-preload-236151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:a3:5b", ip: ""} in network mk-no-preload-236151: {Iface:virbr3 ExpiryTime:2024-10-07 13:01:32 +0000 UTC Type:0 Mac:52:54:00:c8:a3:5b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:no-preload-236151 Clientid:01:52:54:00:c8:a3:5b}
	I1007 12:01:42.954426   71606 main.go:141] libmachine: (no-preload-236151) DBG | domain no-preload-236151 has defined IP address 192.168.72.252 and MAC address 52:54:00:c8:a3:5b in network mk-no-preload-236151
	I1007 12:01:42.954654   71606 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1007 12:01:42.959132   71606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:01:42.972887   71606 kubeadm.go:883] updating cluster {Name:no-preload-236151 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-236151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.252 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 12:01:42.972998   71606 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:01:42.973040   71606 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:01:43.015644   71606 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 12:01:43.015670   71606 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1007 12:01:43.015752   71606 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1007 12:01:43.015779   71606 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1007 12:01:43.015797   71606 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1007 12:01:43.015788   71606 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1007 12:01:43.015766   71606 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1007 12:01:43.015825   71606 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1007 12:01:43.015881   71606 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1007 12:01:43.015737   71606 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:01:43.017917   71606 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1007 12:01:43.017931   71606 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1007 12:01:43.017923   71606 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1007 12:01:43.017947   71606 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1007 12:01:43.017947   71606 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:01:43.017928   71606 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1007 12:01:43.017956   71606 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1007 12:01:43.017995   71606 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1007 12:01:43.219664   71606 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1007 12:01:43.219702   71606 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1007 12:01:43.233949   71606 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1007 12:01:43.271383   71606 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1007 12:01:43.271431   71606 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1007 12:01:43.271508   71606 ssh_runner.go:195] Run: which crictl
	I1007 12:01:43.355416   71606 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1007 12:01:43.356122   71606 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1007 12:01:43.361866   71606 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1007 12:01:43.385779   71606 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1007 12:01:43.385826   71606 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1007 12:01:43.385867   71606 ssh_runner.go:195] Run: which crictl
	I1007 12:01:43.385868   71606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1007 12:01:43.412495   71606 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1007 12:01:43.438426   71606 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1007 12:01:43.438474   71606 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1007 12:01:43.438530   71606 ssh_runner.go:195] Run: which crictl
	I1007 12:01:43.474193   71606 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1007 12:01:43.474240   71606 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1007 12:01:43.474246   71606 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1007 12:01:43.474280   71606 ssh_runner.go:195] Run: which crictl
	I1007 12:01:43.474275   71606 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1007 12:01:43.474280   71606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1007 12:01:43.474337   71606 ssh_runner.go:195] Run: which crictl
	I1007 12:01:43.498649   71606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1007 12:01:43.518406   71606 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1007 12:01:43.518470   71606 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1007 12:01:43.518489   71606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1007 12:01:43.518499   71606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1007 12:01:43.518508   71606 ssh_runner.go:195] Run: which crictl
	I1007 12:01:43.518415   71606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1007 12:01:43.552583   71606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1007 12:01:43.583216   71606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1007 12:01:43.621558   71606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1007 12:01:43.647456   71606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1007 12:01:43.647506   71606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1007 12:01:43.647542   71606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1007 12:01:43.683223   71606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1007 12:01:43.700216   71606 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1007 12:01:43.700338   71606 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1007 12:01:43.756304   71606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1007 12:01:43.775643   71606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1007 12:01:43.775727   71606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1007 12:01:43.787221   71606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1007 12:01:43.841384   71606 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1007 12:01:43.841435   71606 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1007 12:01:43.841462   71606 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1007 12:01:43.841515   71606 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1007 12:01:43.841520   71606 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1007 12:01:43.855261   71606 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1007 12:01:43.855367   71606 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1007 12:01:43.905538   71606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1007 12:01:43.905564   71606 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1007 12:01:43.905601   71606 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1007 12:01:43.905657   71606 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1007 12:01:43.905667   71606 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1007 12:01:43.905668   71606 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1007 12:01:44.238305   71606 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:01:43.764713   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:01:43.765325   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | unable to find current IP address of domain old-k8s-version-610504 in network mk-old-k8s-version-610504
	I1007 12:01:43.765373   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | I1007 12:01:43.765275   72913 retry.go:31] will retry after 522.104454ms: waiting for machine to come up
	I1007 12:01:44.289092   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:01:44.289659   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | unable to find current IP address of domain old-k8s-version-610504 in network mk-old-k8s-version-610504
	I1007 12:01:44.289689   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | I1007 12:01:44.289624   72913 retry.go:31] will retry after 569.282945ms: waiting for machine to come up
	I1007 12:01:44.860469   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:01:44.860917   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | unable to find current IP address of domain old-k8s-version-610504 in network mk-old-k8s-version-610504
	I1007 12:01:44.860946   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | I1007 12:01:44.860869   72913 retry.go:31] will retry after 941.494999ms: waiting for machine to come up
	I1007 12:01:45.804192   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:01:45.804732   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | unable to find current IP address of domain old-k8s-version-610504 in network mk-old-k8s-version-610504
	I1007 12:01:45.804756   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | I1007 12:01:45.804698   72913 retry.go:31] will retry after 1.135275134s: waiting for machine to come up
	I1007 12:01:46.941255   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:01:46.941732   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | unable to find current IP address of domain old-k8s-version-610504 in network mk-old-k8s-version-610504
	I1007 12:01:46.941761   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | I1007 12:01:46.941688   72913 retry.go:31] will retry after 1.189641519s: waiting for machine to come up
	I1007 12:01:48.132997   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:01:48.133445   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | unable to find current IP address of domain old-k8s-version-610504 in network mk-old-k8s-version-610504
	I1007 12:01:48.133475   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | I1007 12:01:48.133395   72913 retry.go:31] will retry after 1.496848746s: waiting for machine to come up
	I1007 12:01:45.957509   71606 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.102115919s)
	I1007 12:01:45.957559   71606 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1007 12:01:45.957560   71606 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.11593638s)
	I1007 12:01:45.957575   71606 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1007 12:01:45.957600   71606 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1007 12:01:45.957642   71606 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1: (2.052069132s)
	I1007 12:01:45.957656   71606 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1007 12:01:45.957683   71606 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1007 12:01:45.957706   71606 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: (2.052017028s)
	I1007 12:01:45.957730   71606 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1007 12:01:45.957786   71606 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1007 12:01:45.957789   71606 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.052058915s)
	I1007 12:01:45.957861   71606 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1007 12:01:45.957840   71606 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.719501884s)
	I1007 12:01:45.957899   71606 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1007 12:01:45.957926   71606 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:01:45.957958   71606 ssh_runner.go:195] Run: which crictl
	I1007 12:01:45.962678   71606 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1007 12:01:47.925695   71606 ssh_runner.go:235] Completed: which crictl: (1.967714335s)
	I1007 12:01:47.925768   71606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:01:47.925778   71606 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.968101567s)
	I1007 12:01:47.925794   71606 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1007 12:01:47.925826   71606 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1007 12:01:47.925901   71606 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1007 12:01:47.964043   71606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:01:49.631619   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:01:49.632110   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | unable to find current IP address of domain old-k8s-version-610504 in network mk-old-k8s-version-610504
	I1007 12:01:49.632136   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | I1007 12:01:49.632067   72913 retry.go:31] will retry after 2.138984215s: waiting for machine to come up
	I1007 12:01:51.772879   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:01:51.773490   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | unable to find current IP address of domain old-k8s-version-610504 in network mk-old-k8s-version-610504
	I1007 12:01:51.773525   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | I1007 12:01:51.773431   72913 retry.go:31] will retry after 2.078073388s: waiting for machine to come up
	I1007 12:01:51.902950   71606 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.977019452s)
	I1007 12:01:51.902981   71606 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1007 12:01:51.903009   71606 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1007 12:01:51.903019   71606 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.938943839s)
	I1007 12:01:51.903061   71606 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1007 12:01:51.903077   71606 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:01:53.885323   71606 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.982237214s)
	I1007 12:01:53.885359   71606 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1007 12:01:53.885360   71606 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.982265848s)
	I1007 12:01:53.885389   71606 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1007 12:01:53.885401   71606 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1007 12:01:53.885456   71606 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1007 12:01:53.885511   71606 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1007 12:01:53.853635   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:01:53.854356   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | unable to find current IP address of domain old-k8s-version-610504 in network mk-old-k8s-version-610504
	I1007 12:01:53.854389   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | I1007 12:01:53.854295   72913 retry.go:31] will retry after 2.92801576s: waiting for machine to come up
	I1007 12:01:56.783492   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:01:56.784046   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | unable to find current IP address of domain old-k8s-version-610504 in network mk-old-k8s-version-610504
	I1007 12:01:56.784066   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | I1007 12:01:56.784017   72913 retry.go:31] will retry after 3.422180887s: waiting for machine to come up
	I1007 12:01:55.752398   71606 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.866913365s)
	I1007 12:01:55.752437   71606 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1007 12:01:55.752441   71606 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.866912889s)
	I1007 12:01:55.752460   71606 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1007 12:01:55.752467   71606 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1007 12:01:55.752528   71606 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1007 12:01:57.303625   71606 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.551066149s)
	I1007 12:01:57.303654   71606 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1007 12:01:57.303682   71606 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1007 12:01:57.303746   71606 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1007 12:01:58.253428   71606 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1007 12:01:58.253480   71606 cache_images.go:123] Successfully loaded all cached images
	I1007 12:01:58.253487   71606 cache_images.go:92] duration metric: took 15.237802088s to LoadCachedImages
	I1007 12:01:58.253507   71606 kubeadm.go:934] updating node { 192.168.72.252 8443 v1.31.1 crio true true} ...
	I1007 12:01:58.253637   71606 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-236151 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.252
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-236151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:01:58.253830   71606 ssh_runner.go:195] Run: crio config
	I1007 12:01:58.304728   71606 cni.go:84] Creating CNI manager for ""
	I1007 12:01:58.304755   71606 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 12:01:58.304774   71606 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 12:01:58.304799   71606 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.252 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-236151 NodeName:no-preload-236151 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.252"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.252 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 12:01:58.304978   71606 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.252
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-236151"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.252
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.252"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 12:01:58.305040   71606 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:01:58.316516   71606 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:01:58.316581   71606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 12:01:58.326688   71606 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1007 12:01:58.344591   71606 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:01:58.362699   71606 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I1007 12:01:58.381597   71606 ssh_runner.go:195] Run: grep 192.168.72.252	control-plane.minikube.internal$ /etc/hosts
	I1007 12:01:58.385558   71606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.252	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:01:58.398642   71606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:01:58.533602   71606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:01:58.560085   71606 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/no-preload-236151 for IP: 192.168.72.252
	I1007 12:01:58.560110   71606 certs.go:194] generating shared ca certs ...
	I1007 12:01:58.560125   71606 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:01:58.560283   71606 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 12:01:58.560320   71606 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 12:01:58.560332   71606 certs.go:256] generating profile certs ...
	I1007 12:01:58.560415   71606 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/no-preload-236151/client.key
	I1007 12:01:58.560493   71606 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/no-preload-236151/apiserver.key.c2cb8310
	I1007 12:01:58.560534   71606 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/no-preload-236151/proxy-client.key
	I1007 12:01:58.560658   71606 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 12:01:58.560707   71606 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 12:01:58.560717   71606 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 12:01:58.560746   71606 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:01:58.560769   71606 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:01:58.560808   71606 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 12:01:58.560846   71606 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 12:01:58.561439   71606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:01:58.613266   71606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 12:01:58.650576   71606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:01:58.680674   71606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 12:01:58.716973   71606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/no-preload-236151/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1007 12:01:58.748627   71606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/no-preload-236151/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 12:01:58.777488   71606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/no-preload-236151/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:01:58.803295   71606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/no-preload-236151/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:01:58.828533   71606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 12:01:58.853338   71606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 12:01:58.877138   71606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:01:58.901970   71606 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 12:01:58.920497   71606 ssh_runner.go:195] Run: openssl version
	I1007 12:01:58.927530   71606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 12:01:58.939075   71606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 12:01:58.943732   71606 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 12:01:58.943796   71606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 12:01:58.949961   71606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
	I1007 12:01:58.961837   71606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 12:01:58.974529   71606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 12:01:58.979421   71606 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 12:01:58.979480   71606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 12:01:58.985905   71606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:01:58.999184   71606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:01:59.012816   71606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:01:59.017994   71606 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:01:59.018069   71606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:01:59.024446   71606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:01:59.037072   71606 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:01:59.041890   71606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 12:01:59.048258   71606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 12:01:59.054853   71606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 12:01:59.061229   71606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 12:01:59.067383   71606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 12:01:59.073787   71606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1007 12:01:59.080320   71606 kubeadm.go:392] StartCluster: {Name:no-preload-236151 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-236151 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.252 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:01:59.080413   71606 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 12:01:59.080470   71606 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:01:59.120543   71606 cri.go:89] found id: ""
	I1007 12:01:59.120644   71606 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 12:01:59.131504   71606 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 12:01:59.131530   71606 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 12:01:59.131574   71606 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 12:01:59.143675   71606 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 12:01:59.144711   71606 kubeconfig.go:125] found "no-preload-236151" server: "https://192.168.72.252:8443"
	I1007 12:01:59.146700   71606 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 12:01:59.158617   71606 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.252
	I1007 12:01:59.158653   71606 kubeadm.go:1160] stopping kube-system containers ...
	I1007 12:01:59.158664   71606 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1007 12:01:59.158706   71606 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:01:59.204574   71606 cri.go:89] found id: ""
	I1007 12:01:59.204657   71606 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1007 12:01:59.225663   71606 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 12:01:59.238216   71606 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 12:01:59.238234   71606 kubeadm.go:157] found existing configuration files:
	
	I1007 12:01:59.238282   71606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 12:01:59.249736   71606 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 12:01:59.249791   71606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 12:01:59.260629   71606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 12:01:59.270635   71606 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 12:01:59.270707   71606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 12:01:59.280698   71606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 12:01:59.290229   71606 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 12:01:59.290289   71606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 12:01:59.300584   71606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 12:01:59.310533   71606 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 12:01:59.310597   71606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 12:01:59.321315   71606 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 12:01:59.332699   71606 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 12:01:59.461649   71606 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 12:02:01.722448   71483 start.go:364] duration metric: took 35.273591268s to acquireMachinesLock for "embed-certs-475689"
	I1007 12:02:01.722499   71483 start.go:96] Skipping create...Using existing machine configuration
	I1007 12:02:01.722512   71483 fix.go:54] fixHost starting: 
	I1007 12:02:01.722920   71483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:02:01.722963   71483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:02:01.744207   71483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35707
	I1007 12:02:01.744730   71483 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:02:01.745316   71483 main.go:141] libmachine: Using API Version  1
	I1007 12:02:01.745336   71483 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:02:01.745714   71483 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:02:01.745870   71483 main.go:141] libmachine: (embed-certs-475689) Calling .DriverName
	I1007 12:02:01.745996   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetState
	I1007 12:02:01.748033   71483 fix.go:112] recreateIfNeeded on embed-certs-475689: state=Stopped err=<nil>
	I1007 12:02:01.748058   71483 main.go:141] libmachine: (embed-certs-475689) Calling .DriverName
	W1007 12:02:01.748199   71483 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 12:02:01.750099   71483 out.go:177] * Restarting existing kvm2 VM for "embed-certs-475689" ...
	I1007 12:02:00.209120   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:00.209714   72038 main.go:141] libmachine: (old-k8s-version-610504) Found IP for machine: 192.168.39.75
	I1007 12:02:00.209742   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has current primary IP address 192.168.39.75 and MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:00.209752   72038 main.go:141] libmachine: (old-k8s-version-610504) Reserving static IP address...
	I1007 12:02:00.210173   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | found host DHCP lease matching {name: "old-k8s-version-610504", mac: "52:54:00:0e:a9:46", ip: "192.168.39.75"} in network mk-old-k8s-version-610504: {Iface:virbr1 ExpiryTime:2024-10-07 12:51:58 +0000 UTC Type:0 Mac:52:54:00:0e:a9:46 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:old-k8s-version-610504 Clientid:01:52:54:00:0e:a9:46}
	I1007 12:02:00.210213   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | skip adding static IP to network mk-old-k8s-version-610504 - found existing host DHCP lease matching {name: "old-k8s-version-610504", mac: "52:54:00:0e:a9:46", ip: "192.168.39.75"}
	I1007 12:02:00.210226   72038 main.go:141] libmachine: (old-k8s-version-610504) Reserved static IP address: 192.168.39.75
	I1007 12:02:00.210258   72038 main.go:141] libmachine: (old-k8s-version-610504) Waiting for SSH to be available...
	I1007 12:02:00.210274   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | Getting to WaitForSSH function...
	I1007 12:02:00.212315   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:00.212666   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:a9:46", ip: ""} in network mk-old-k8s-version-610504: {Iface:virbr1 ExpiryTime:2024-10-07 12:51:58 +0000 UTC Type:0 Mac:52:54:00:0e:a9:46 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:old-k8s-version-610504 Clientid:01:52:54:00:0e:a9:46}
	I1007 12:02:00.212726   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined IP address 192.168.39.75 and MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:00.212837   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | Using SSH client type: external
	I1007 12:02:00.212864   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/old-k8s-version-610504/id_rsa (-rw-------)
	I1007 12:02:00.212901   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.75 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/old-k8s-version-610504/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:02:00.212916   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | About to run SSH command:
	I1007 12:02:00.212990   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | exit 0
	I1007 12:02:00.340150   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | SSH cmd err, output: <nil>: 
	I1007 12:02:00.340579   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetConfigRaw
	I1007 12:02:00.341336   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetIP
	I1007 12:02:00.343915   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:00.344290   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:a9:46", ip: ""} in network mk-old-k8s-version-610504: {Iface:virbr1 ExpiryTime:2024-10-07 12:51:58 +0000 UTC Type:0 Mac:52:54:00:0e:a9:46 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:old-k8s-version-610504 Clientid:01:52:54:00:0e:a9:46}
	I1007 12:02:00.344337   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined IP address 192.168.39.75 and MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:00.344582   72038 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/old-k8s-version-610504/config.json ...
	I1007 12:02:00.344777   72038 machine.go:93] provisionDockerMachine start ...
	I1007 12:02:00.344794   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .DriverName
	I1007 12:02:00.345042   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHHostname
	I1007 12:02:00.347267   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:00.347582   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:a9:46", ip: ""} in network mk-old-k8s-version-610504: {Iface:virbr1 ExpiryTime:2024-10-07 12:51:58 +0000 UTC Type:0 Mac:52:54:00:0e:a9:46 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:old-k8s-version-610504 Clientid:01:52:54:00:0e:a9:46}
	I1007 12:02:00.347610   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined IP address 192.168.39.75 and MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:00.347839   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHPort
	I1007 12:02:00.348077   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHKeyPath
	I1007 12:02:00.348236   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHKeyPath
	I1007 12:02:00.348381   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHUsername
	I1007 12:02:00.348538   72038 main.go:141] libmachine: Using SSH client type: native
	I1007 12:02:00.348723   72038 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.75 22 <nil> <nil>}
	I1007 12:02:00.348732   72038 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 12:02:00.464666   72038 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1007 12:02:00.464698   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetMachineName
	I1007 12:02:00.464977   72038 buildroot.go:166] provisioning hostname "old-k8s-version-610504"
	I1007 12:02:00.465025   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetMachineName
	I1007 12:02:00.465230   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHHostname
	I1007 12:02:00.468074   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:00.468598   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:a9:46", ip: ""} in network mk-old-k8s-version-610504: {Iface:virbr1 ExpiryTime:2024-10-07 12:51:58 +0000 UTC Type:0 Mac:52:54:00:0e:a9:46 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:old-k8s-version-610504 Clientid:01:52:54:00:0e:a9:46}
	I1007 12:02:00.468631   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined IP address 192.168.39.75 and MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:00.468754   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHPort
	I1007 12:02:00.468931   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHKeyPath
	I1007 12:02:00.469131   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHKeyPath
	I1007 12:02:00.469317   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHUsername
	I1007 12:02:00.469485   72038 main.go:141] libmachine: Using SSH client type: native
	I1007 12:02:00.469730   72038 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.75 22 <nil> <nil>}
	I1007 12:02:00.469750   72038 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-610504 && echo "old-k8s-version-610504" | sudo tee /etc/hostname
	I1007 12:02:00.598577   72038 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-610504
	
	I1007 12:02:00.598608   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHHostname
	I1007 12:02:00.601918   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:00.602362   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:a9:46", ip: ""} in network mk-old-k8s-version-610504: {Iface:virbr1 ExpiryTime:2024-10-07 12:51:58 +0000 UTC Type:0 Mac:52:54:00:0e:a9:46 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:old-k8s-version-610504 Clientid:01:52:54:00:0e:a9:46}
	I1007 12:02:00.602407   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined IP address 192.168.39.75 and MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:00.602644   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHPort
	I1007 12:02:00.602811   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHKeyPath
	I1007 12:02:00.602930   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHKeyPath
	I1007 12:02:00.603057   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHUsername
	I1007 12:02:00.603182   72038 main.go:141] libmachine: Using SSH client type: native
	I1007 12:02:00.603385   72038 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.75 22 <nil> <nil>}
	I1007 12:02:00.603411   72038 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-610504' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-610504/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-610504' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:02:00.725203   72038 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:02:00.725231   72038 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 12:02:00.725274   72038 buildroot.go:174] setting up certificates
	I1007 12:02:00.725287   72038 provision.go:84] configureAuth start
	I1007 12:02:00.725300   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetMachineName
	I1007 12:02:00.725604   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetIP
	I1007 12:02:00.728323   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:00.728751   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:a9:46", ip: ""} in network mk-old-k8s-version-610504: {Iface:virbr1 ExpiryTime:2024-10-07 12:51:58 +0000 UTC Type:0 Mac:52:54:00:0e:a9:46 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:old-k8s-version-610504 Clientid:01:52:54:00:0e:a9:46}
	I1007 12:02:00.728785   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined IP address 192.168.39.75 and MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:00.728964   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHHostname
	I1007 12:02:00.731497   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:00.731880   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:a9:46", ip: ""} in network mk-old-k8s-version-610504: {Iface:virbr1 ExpiryTime:2024-10-07 12:51:58 +0000 UTC Type:0 Mac:52:54:00:0e:a9:46 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:old-k8s-version-610504 Clientid:01:52:54:00:0e:a9:46}
	I1007 12:02:00.731910   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined IP address 192.168.39.75 and MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:00.732061   72038 provision.go:143] copyHostCerts
	I1007 12:02:00.732127   72038 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 12:02:00.732148   72038 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 12:02:00.732214   72038 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 12:02:00.732342   72038 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 12:02:00.732355   72038 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 12:02:00.732385   72038 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 12:02:00.732462   72038 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 12:02:00.732478   72038 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 12:02:00.732499   72038 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 12:02:00.732592   72038 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-610504 san=[127.0.0.1 192.168.39.75 localhost minikube old-k8s-version-610504]
	I1007 12:02:01.043124   72038 provision.go:177] copyRemoteCerts
	I1007 12:02:01.043181   72038 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:02:01.043207   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHHostname
	I1007 12:02:01.046177   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:01.046589   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:a9:46", ip: ""} in network mk-old-k8s-version-610504: {Iface:virbr1 ExpiryTime:2024-10-07 12:51:58 +0000 UTC Type:0 Mac:52:54:00:0e:a9:46 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:old-k8s-version-610504 Clientid:01:52:54:00:0e:a9:46}
	I1007 12:02:01.046638   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined IP address 192.168.39.75 and MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:01.046859   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHPort
	I1007 12:02:01.047074   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHKeyPath
	I1007 12:02:01.047225   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHUsername
	I1007 12:02:01.047337   72038 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/old-k8s-version-610504/id_rsa Username:docker}
	I1007 12:02:01.131330   72038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:02:01.156039   72038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1007 12:02:01.181083   72038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 12:02:01.207057   72038 provision.go:87] duration metric: took 481.758055ms to configureAuth
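The configureAuth step above copies the host certificates and signs a server certificate whose SANs cover the VM's IP and hostnames (san=[127.0.0.1 192.168.39.75 localhost minikube old-k8s-version-610504]). Below is a minimal, self-contained Go sketch of that idea, using a throwaway in-memory CA rather than minikube's on-disk ca.pem; the names, key size and validity period are illustrative, not minikube's actual provisioner code.

// certsketch.go - sign a server certificate with the SANs seen in the log,
// against a throwaway CA generated in memory (illustrative only).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA key and self-signed CA certificate.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs from the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-610504"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-610504"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.75")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}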
	I1007 12:02:01.207085   72038 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:02:01.207273   72038 config.go:182] Loaded profile config "old-k8s-version-610504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1007 12:02:01.207345   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHHostname
	I1007 12:02:01.209913   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:01.210252   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:a9:46", ip: ""} in network mk-old-k8s-version-610504: {Iface:virbr1 ExpiryTime:2024-10-07 12:51:58 +0000 UTC Type:0 Mac:52:54:00:0e:a9:46 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:old-k8s-version-610504 Clientid:01:52:54:00:0e:a9:46}
	I1007 12:02:01.210284   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined IP address 192.168.39.75 and MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:01.210445   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHPort
	I1007 12:02:01.210642   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHKeyPath
	I1007 12:02:01.210802   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHKeyPath
	I1007 12:02:01.210896   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHUsername
	I1007 12:02:01.211044   72038 main.go:141] libmachine: Using SSH client type: native
	I1007 12:02:01.211275   72038 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.75 22 <nil> <nil>}
	I1007 12:02:01.211297   72038 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:02:01.454370   72038 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:02:01.454399   72038 machine.go:96] duration metric: took 1.109609867s to provisionDockerMachine
	I1007 12:02:01.454413   72038 start.go:293] postStartSetup for "old-k8s-version-610504" (driver="kvm2")
	I1007 12:02:01.454426   72038 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:02:01.454472   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .DriverName
	I1007 12:02:01.454819   72038 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:02:01.454851   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHHostname
	I1007 12:02:01.458001   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:01.458408   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:a9:46", ip: ""} in network mk-old-k8s-version-610504: {Iface:virbr1 ExpiryTime:2024-10-07 12:51:58 +0000 UTC Type:0 Mac:52:54:00:0e:a9:46 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:old-k8s-version-610504 Clientid:01:52:54:00:0e:a9:46}
	I1007 12:02:01.458454   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined IP address 192.168.39.75 and MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:01.458603   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHPort
	I1007 12:02:01.458828   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHKeyPath
	I1007 12:02:01.459025   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHUsername
	I1007 12:02:01.459378   72038 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/old-k8s-version-610504/id_rsa Username:docker}
	I1007 12:02:01.550142   72038 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:02:01.556260   72038 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:02:01.556286   72038 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 12:02:01.556356   72038 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 12:02:01.556452   72038 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 12:02:01.556571   72038 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:02:01.571646   72038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 12:02:01.597811   72038 start.go:296] duration metric: took 143.380583ms for postStartSetup
	I1007 12:02:01.597868   72038 fix.go:56] duration metric: took 20.172652206s for fixHost
	I1007 12:02:01.597919   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHHostname
	I1007 12:02:01.601412   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:01.601872   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:a9:46", ip: ""} in network mk-old-k8s-version-610504: {Iface:virbr1 ExpiryTime:2024-10-07 12:51:58 +0000 UTC Type:0 Mac:52:54:00:0e:a9:46 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:old-k8s-version-610504 Clientid:01:52:54:00:0e:a9:46}
	I1007 12:02:01.601910   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined IP address 192.168.39.75 and MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:01.602086   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHPort
	I1007 12:02:01.602325   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHKeyPath
	I1007 12:02:01.602553   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHKeyPath
	I1007 12:02:01.602762   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHUsername
	I1007 12:02:01.602992   72038 main.go:141] libmachine: Using SSH client type: native
	I1007 12:02:01.603205   72038 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.75 22 <nil> <nil>}
	I1007 12:02:01.603217   72038 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:02:01.722305   72038 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728302521.695584872
	
	I1007 12:02:01.722327   72038 fix.go:216] guest clock: 1728302521.695584872
	I1007 12:02:01.722334   72038 fix.go:229] Guest: 2024-10-07 12:02:01.695584872 +0000 UTC Remote: 2024-10-07 12:02:01.597874356 +0000 UTC m=+243.055755013 (delta=97.710516ms)
	I1007 12:02:01.722354   72038 fix.go:200] guest clock delta is within tolerance: 97.710516ms
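The fix.go lines above read the guest clock with `date +%s.%N` and compare it against the host-side timestamp taken when the command was issued; only a delta beyond tolerance would force a resync. A small sketch of that comparison, with the parsing and tolerance chosen for illustration rather than taken from minikube's code:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// withinTolerance reports whether the guest clock (seconds.nanoseconds, as
// printed by `date +%s.%N`) is within tol of the host-side reference time.
// float64 parsing loses sub-microsecond precision, which is fine for a
// tolerance check in the hundreds of milliseconds.
func withinTolerance(guestOut string, local time.Time, tol time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(local)
	return delta, math.Abs(float64(delta)) <= float64(tol)
}

func main() {
	// Values taken from the log lines above.
	local := time.Date(2024, 10, 7, 12, 2, 1, 597874356, time.UTC)
	delta, ok := withinTolerance("1728302521.695584872", local, time.Second)
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, ok)
}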
	I1007 12:02:01.722359   72038 start.go:83] releasing machines lock for "old-k8s-version-610504", held for 20.297193468s
	I1007 12:02:01.722381   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .DriverName
	I1007 12:02:01.722629   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetIP
	I1007 12:02:01.725596   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:01.726021   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:a9:46", ip: ""} in network mk-old-k8s-version-610504: {Iface:virbr1 ExpiryTime:2024-10-07 12:51:58 +0000 UTC Type:0 Mac:52:54:00:0e:a9:46 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:old-k8s-version-610504 Clientid:01:52:54:00:0e:a9:46}
	I1007 12:02:01.726046   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined IP address 192.168.39.75 and MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:01.726245   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .DriverName
	I1007 12:02:01.726729   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .DriverName
	I1007 12:02:01.726884   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .DriverName
	I1007 12:02:01.726961   72038 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:02:01.727005   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHHostname
	I1007 12:02:01.727083   72038 ssh_runner.go:195] Run: cat /version.json
	I1007 12:02:01.727106   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHHostname
	I1007 12:02:01.729708   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:01.729744   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:01.730146   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:a9:46", ip: ""} in network mk-old-k8s-version-610504: {Iface:virbr1 ExpiryTime:2024-10-07 12:51:58 +0000 UTC Type:0 Mac:52:54:00:0e:a9:46 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:old-k8s-version-610504 Clientid:01:52:54:00:0e:a9:46}
	I1007 12:02:01.730172   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined IP address 192.168.39.75 and MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:01.730198   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:a9:46", ip: ""} in network mk-old-k8s-version-610504: {Iface:virbr1 ExpiryTime:2024-10-07 12:51:58 +0000 UTC Type:0 Mac:52:54:00:0e:a9:46 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:old-k8s-version-610504 Clientid:01:52:54:00:0e:a9:46}
	I1007 12:02:01.730211   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined IP address 192.168.39.75 and MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:01.730487   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHPort
	I1007 12:02:01.730554   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHPort
	I1007 12:02:01.730669   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHKeyPath
	I1007 12:02:01.730714   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHKeyPath
	I1007 12:02:01.730830   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHUsername
	I1007 12:02:01.730843   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetSSHUsername
	I1007 12:02:01.730973   72038 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/old-k8s-version-610504/id_rsa Username:docker}
	I1007 12:02:01.730978   72038 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/old-k8s-version-610504/id_rsa Username:docker}
	I1007 12:02:01.837732   72038 ssh_runner.go:195] Run: systemctl --version
	I1007 12:02:01.846004   72038 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:02:01.999238   72038 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:02:02.006528   72038 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:02:02.006610   72038 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:02:02.029537   72038 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:02:02.029565   72038 start.go:495] detecting cgroup driver to use...
	I1007 12:02:02.029637   72038 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:02:02.054684   72038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:02:02.075591   72038 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:02:02.075659   72038 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:02:02.096433   72038 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:02:02.116711   72038 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:02:02.237713   72038 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:02:02.409548   72038 docker.go:233] disabling docker service ...
	I1007 12:02:02.409620   72038 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:02:02.430890   72038 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:02:02.450573   72038 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:02:02.645761   72038 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:02:02.829268   72038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:02:02.855226   72038 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:02:02.879591   72038 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1007 12:02:02.879662   72038 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:02:02.894016   72038 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:02:02.894119   72038 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:02:02.908468   72038 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:02:02.922491   72038 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:02:02.936704   72038 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:02:02.951471   72038 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:02:02.961840   72038 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:02:02.961917   72038 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:02:02.978606   72038 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:02:02.991878   72038 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:02:03.115356   72038 ssh_runner.go:195] Run: sudo systemctl restart crio
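The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed to pin the pause image and the cgroup manager, then restarts CRI-O. A rough Go equivalent of one such rewrite; the helper name setConfigValue and the file mode are illustrative:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfigValue rewrites every `key = ...` line in a TOML-style config file,
// mimicking: sed -i 's|^.*pause_image = .*$|pause_image = "..."|' <file>
func setConfigValue(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = %q`, key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setConfigValue("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.2"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}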
	I1007 12:02:03.222404   72038 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:02:03.222481   72038 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:02:03.227662   72038 start.go:563] Will wait 60s for crictl version
	I1007 12:02:03.227715   72038 ssh_runner.go:195] Run: which crictl
	I1007 12:02:03.231607   72038 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:02:03.270777   72038 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:02:03.270859   72038 ssh_runner.go:195] Run: crio --version
	I1007 12:02:03.301225   72038 ssh_runner.go:195] Run: crio --version
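After the restart, the start code waits up to 60s for /var/run/crio/crio.sock to appear before asking crictl for the runtime version. A minimal wait-for-path loop under the same assumptions; the poll interval and timeout here are illustrative:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until the path exists or the timeout elapses.
func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is present")
}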
	I1007 12:02:03.337860   72038 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1007 12:02:03.339088   72038 main.go:141] libmachine: (old-k8s-version-610504) Calling .GetIP
	I1007 12:02:03.342372   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:03.342786   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:a9:46", ip: ""} in network mk-old-k8s-version-610504: {Iface:virbr1 ExpiryTime:2024-10-07 12:51:58 +0000 UTC Type:0 Mac:52:54:00:0e:a9:46 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:old-k8s-version-610504 Clientid:01:52:54:00:0e:a9:46}
	I1007 12:02:03.342815   72038 main.go:141] libmachine: (old-k8s-version-610504) DBG | domain old-k8s-version-610504 has defined IP address 192.168.39.75 and MAC address 52:54:00:0e:a9:46 in network mk-old-k8s-version-610504
	I1007 12:02:03.343023   72038 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:02:03.347529   72038 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
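The grep/cp pipeline above guarantees exactly one host.minikube.internal entry in /etc/hosts, pointing at the libvirt gateway. The same idea sketched in Go; ensureHostsEntry and the /tmp path in main are hypothetical, and a real run would target the actual /etc/hosts with root privileges:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any line mentioning host and appends "ip<TAB>host",
// mirroring: { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "ip\thost"; } > tmp; cp tmp /etc/hosts
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.Contains(line, host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Hypothetical working copy used for illustration.
	if err := ensureHostsEntry("/tmp/hosts.copy", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}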
	I1007 12:02:03.360315   72038 kubeadm.go:883] updating cluster {Name:old-k8s-version-610504 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-610504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.75 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 12:02:03.360462   72038 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1007 12:02:03.360513   72038 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:02:03.410064   72038 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1007 12:02:03.410159   72038 ssh_runner.go:195] Run: which lz4
	I1007 12:02:03.414432   72038 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 12:02:03.418852   72038 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 12:02:03.418884   72038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1007 12:02:01.751404   71483 main.go:141] libmachine: (embed-certs-475689) Calling .Start
	I1007 12:02:01.751621   71483 main.go:141] libmachine: (embed-certs-475689) Ensuring networks are active...
	I1007 12:02:01.752609   71483 main.go:141] libmachine: (embed-certs-475689) Ensuring network default is active
	I1007 12:02:01.753016   71483 main.go:141] libmachine: (embed-certs-475689) Ensuring network mk-embed-certs-475689 is active
	I1007 12:02:01.753492   71483 main.go:141] libmachine: (embed-certs-475689) Getting domain xml...
	I1007 12:02:01.754231   71483 main.go:141] libmachine: (embed-certs-475689) Creating domain...
	I1007 12:02:03.080076   71483 main.go:141] libmachine: (embed-certs-475689) Waiting to get IP...
	I1007 12:02:03.081005   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:03.081490   71483 main.go:141] libmachine: (embed-certs-475689) DBG | unable to find current IP address of domain embed-certs-475689 in network mk-embed-certs-475689
	I1007 12:02:03.081566   71483 main.go:141] libmachine: (embed-certs-475689) DBG | I1007 12:02:03.081486   73068 retry.go:31] will retry after 232.695969ms: waiting for machine to come up
	I1007 12:02:03.316232   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:03.316929   71483 main.go:141] libmachine: (embed-certs-475689) DBG | unable to find current IP address of domain embed-certs-475689 in network mk-embed-certs-475689
	I1007 12:02:03.316958   71483 main.go:141] libmachine: (embed-certs-475689) DBG | I1007 12:02:03.316876   73068 retry.go:31] will retry after 264.677664ms: waiting for machine to come up
	I1007 12:02:03.583220   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:03.583641   71483 main.go:141] libmachine: (embed-certs-475689) DBG | unable to find current IP address of domain embed-certs-475689 in network mk-embed-certs-475689
	I1007 12:02:03.583672   71483 main.go:141] libmachine: (embed-certs-475689) DBG | I1007 12:02:03.583593   73068 retry.go:31] will retry after 396.437045ms: waiting for machine to come up
	I1007 12:02:00.538775   71606 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.07707595s)
	I1007 12:02:00.538804   71606 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1007 12:02:00.763381   71606 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 12:02:00.855210   71606 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1007 12:02:00.965452   71606 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:02:00.965552   71606 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:01.466105   71606 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:01.966058   71606 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:02.014192   71606 api_server.go:72] duration metric: took 1.048753512s to wait for apiserver process to appear ...
	I1007 12:02:02.014221   71606 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:02:02.014251   71606 api_server.go:253] Checking apiserver healthz at https://192.168.72.252:8443/healthz ...
	I1007 12:02:02.014793   71606 api_server.go:269] stopped: https://192.168.72.252:8443/healthz: Get "https://192.168.72.252:8443/healthz": dial tcp 192.168.72.252:8443: connect: connection refused
	I1007 12:02:02.514477   71606 api_server.go:253] Checking apiserver healthz at https://192.168.72.252:8443/healthz ...
	I1007 12:02:05.578098   71606 api_server.go:279] https://192.168.72.252:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1007 12:02:05.578131   71606 api_server.go:103] status: https://192.168.72.252:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1007 12:02:05.578150   71606 api_server.go:253] Checking apiserver healthz at https://192.168.72.252:8443/healthz ...
	I1007 12:02:05.601491   71606 api_server.go:279] https://192.168.72.252:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1007 12:02:05.601524   71606 api_server.go:103] status: https://192.168.72.252:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1007 12:02:06.015111   71606 api_server.go:253] Checking apiserver healthz at https://192.168.72.252:8443/healthz ...
	I1007 12:02:06.020248   71606 api_server.go:279] https://192.168.72.252:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:02:06.020280   71606 api_server.go:103] status: https://192.168.72.252:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:02:06.515307   71606 api_server.go:253] Checking apiserver healthz at https://192.168.72.252:8443/healthz ...
	I1007 12:02:06.522372   71606 api_server.go:279] https://192.168.72.252:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:02:06.522407   71606 api_server.go:103] status: https://192.168.72.252:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:02:07.015147   71606 api_server.go:253] Checking apiserver healthz at https://192.168.72.252:8443/healthz ...
	I1007 12:02:07.026588   71606 api_server.go:279] https://192.168.72.252:8443/healthz returned 200:
	ok
	I1007 12:02:07.041058   71606 api_server.go:141] control plane version: v1.31.1
	I1007 12:02:07.041093   71606 api_server.go:131] duration metric: took 5.026863042s to wait for apiserver health ...
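The wait above polls https://192.168.72.252:8443/healthz roughly every 500ms, tolerating 403s (anonymous user) and 500s (post-start hooks still running) until a 200 "ok" comes back. A compact version of that polling loop; skipping TLS verification here is purely illustrative, standing in for the apiserver's self-signed certificate:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the endpoint until it returns 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed apiserver cert
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 and 500 responses are expected while the control plane finishes starting.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.252:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}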
	I1007 12:02:07.041104   71606 cni.go:84] Creating CNI manager for ""
	I1007 12:02:07.041113   71606 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 12:02:07.043122   71606 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 12:02:05.206319   72038 crio.go:462] duration metric: took 1.791923396s to copy over tarball
	I1007 12:02:05.206401   72038 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 12:02:03.982450   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:03.983097   71483 main.go:141] libmachine: (embed-certs-475689) DBG | unable to find current IP address of domain embed-certs-475689 in network mk-embed-certs-475689
	I1007 12:02:03.983121   71483 main.go:141] libmachine: (embed-certs-475689) DBG | I1007 12:02:03.983053   73068 retry.go:31] will retry after 489.461264ms: waiting for machine to come up
	I1007 12:02:04.473894   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:04.474666   71483 main.go:141] libmachine: (embed-certs-475689) DBG | unable to find current IP address of domain embed-certs-475689 in network mk-embed-certs-475689
	I1007 12:02:04.474695   71483 main.go:141] libmachine: (embed-certs-475689) DBG | I1007 12:02:04.474550   73068 retry.go:31] will retry after 630.286506ms: waiting for machine to come up
	I1007 12:02:05.106313   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:05.106884   71483 main.go:141] libmachine: (embed-certs-475689) DBG | unable to find current IP address of domain embed-certs-475689 in network mk-embed-certs-475689
	I1007 12:02:05.106911   71483 main.go:141] libmachine: (embed-certs-475689) DBG | I1007 12:02:05.106849   73068 retry.go:31] will retry after 707.485155ms: waiting for machine to come up
	I1007 12:02:05.815739   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:05.816207   71483 main.go:141] libmachine: (embed-certs-475689) DBG | unable to find current IP address of domain embed-certs-475689 in network mk-embed-certs-475689
	I1007 12:02:05.816236   71483 main.go:141] libmachine: (embed-certs-475689) DBG | I1007 12:02:05.816168   73068 retry.go:31] will retry after 785.071728ms: waiting for machine to come up
	I1007 12:02:06.603228   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:06.603969   71483 main.go:141] libmachine: (embed-certs-475689) DBG | unable to find current IP address of domain embed-certs-475689 in network mk-embed-certs-475689
	I1007 12:02:06.604014   71483 main.go:141] libmachine: (embed-certs-475689) DBG | I1007 12:02:06.603909   73068 retry.go:31] will retry after 1.092796637s: waiting for machine to come up
	I1007 12:02:07.697944   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:07.698515   71483 main.go:141] libmachine: (embed-certs-475689) DBG | unable to find current IP address of domain embed-certs-475689 in network mk-embed-certs-475689
	I1007 12:02:07.698545   71483 main.go:141] libmachine: (embed-certs-475689) DBG | I1007 12:02:07.698465   73068 retry.go:31] will retry after 1.642637177s: waiting for machine to come up
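The retry.go lines show the provisioner waiting for the embed-certs VM to pick up a DHCP lease, sleeping for a growing, jittered interval between lookups. A generic sketch of such a retry loop; the doubling-plus-jitter policy below is illustrative, not minikube's exact backoff:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn until it succeeds or the attempts are exhausted, sleeping for
// a jittered, roughly doubling delay between tries (like the intervals above).
func retry(attempts int, base time.Duration, fn func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
	return err
}

func main() {
	lookups := 0
	err := retry(10, 200*time.Millisecond, func() error {
		lookups++
		if lookups < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("done:", err)
}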
	I1007 12:02:07.044726   71606 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 12:02:07.061463   71606 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1007 12:02:07.085260   71606 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:02:07.112375   71606 system_pods.go:59] 8 kube-system pods found
	I1007 12:02:07.112481   71606 system_pods.go:61] "coredns-7c65d6cfc9-n6kzh" [5e7864af-f4a9-4940-8d8a-d7d0cd39eea9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 12:02:07.112505   71606 system_pods.go:61] "etcd-no-preload-236151" [08f1522e-a0a4-4095-acd6-6e8d84987310] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1007 12:02:07.112529   71606 system_pods.go:61] "kube-apiserver-no-preload-236151" [ba06faec-a5ad-49a7-92da-3208ca5a81ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1007 12:02:07.112547   71606 system_pods.go:61] "kube-controller-manager-no-preload-236151" [888c05ef-a163-4988-a3f7-4d31eca10c0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1007 12:02:07.112569   71606 system_pods.go:61] "kube-proxy-tvpkl" [0367357a-cb5c-4966-8756-5918b4989dec] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1007 12:02:07.112587   71606 system_pods.go:61] "kube-scheduler-no-preload-236151" [3eaf3ed4-c654-4c8b-b60b-8fa401d463ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1007 12:02:07.112604   71606 system_pods.go:61] "metrics-server-6867b74b74-kg48w" [ee3e0d2d-4974-4fd3-83f0-0782585e106a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 12:02:07.112618   71606 system_pods.go:61] "storage-provisioner" [c8180361-3ca2-4195-8537-0b30d67a4a21] Running
	I1007 12:02:07.112636   71606 system_pods.go:74] duration metric: took 27.342773ms to wait for pod list to return data ...
	I1007 12:02:07.112653   71606 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:02:07.133310   71606 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:02:07.133392   71606 node_conditions.go:123] node cpu capacity is 2
	I1007 12:02:07.133436   71606 node_conditions.go:105] duration metric: took 20.765047ms to run NodePressure ...
	I1007 12:02:07.133468   71606 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 12:02:07.466381   71606 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1007 12:02:07.471251   71606 kubeadm.go:739] kubelet initialised
	I1007 12:02:07.471274   71606 kubeadm.go:740] duration metric: took 4.866096ms waiting for restarted kubelet to initialise ...
	I1007 12:02:07.471287   71606 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:02:07.476660   71606 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-n6kzh" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:08.607763   72038 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.401230374s)
	I1007 12:02:08.607811   72038 crio.go:469] duration metric: took 3.401459779s to extract the tarball
	I1007 12:02:08.607821   72038 ssh_runner.go:146] rm: /preloaded.tar.lz4
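The preload path streams the image tarball (473237281 bytes per the log) to the VM and unpacks it with tar and lz4 into /var before deleting it. The extraction step can be sketched locally with os/exec; it assumes tar and lz4 on PATH and a writable /var, so treat it as illustrative rather than directly runnable as an ordinary user:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Equivalent of the log's:
	//   sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4",
		"-C", "/var",
		"-xf", "/preloaded.tar.lz4",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
}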
	I1007 12:02:08.660305   72038 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:02:08.702981   72038 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1007 12:02:08.703004   72038 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1007 12:02:08.703041   72038 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:02:08.703100   72038 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1007 12:02:08.703152   72038 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1007 12:02:08.703187   72038 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1007 12:02:08.703212   72038 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1007 12:02:08.703328   72038 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1007 12:02:08.703350   72038 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1007 12:02:08.703140   72038 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 12:02:08.704747   72038 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1007 12:02:08.704767   72038 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1007 12:02:08.704789   72038 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 12:02:08.704798   72038 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1007 12:02:08.704749   72038 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1007 12:02:08.704755   72038 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:02:08.705062   72038 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1007 12:02:08.704748   72038 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1007 12:02:08.855383   72038 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1007 12:02:08.858263   72038 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1007 12:02:08.859746   72038 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 12:02:08.876395   72038 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1007 12:02:08.898074   72038 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1007 12:02:08.898759   72038 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1007 12:02:08.924284   72038 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1007 12:02:08.941554   72038 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1007 12:02:08.941605   72038 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1007 12:02:08.941653   72038 ssh_runner.go:195] Run: which crictl
	I1007 12:02:09.055724   72038 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1007 12:02:09.055776   72038 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1007 12:02:09.055833   72038 ssh_runner.go:195] Run: which crictl
	I1007 12:02:09.058847   72038 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1007 12:02:09.058891   72038 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 12:02:09.058935   72038 ssh_runner.go:195] Run: which crictl
	I1007 12:02:09.082211   72038 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1007 12:02:09.082266   72038 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1007 12:02:09.082231   72038 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1007 12:02:09.082371   72038 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1007 12:02:09.082304   72038 ssh_runner.go:195] Run: which crictl
	I1007 12:02:09.082422   72038 ssh_runner.go:195] Run: which crictl
	I1007 12:02:09.105514   72038 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1007 12:02:09.105563   72038 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1007 12:02:09.105612   72038 ssh_runner.go:195] Run: which crictl
	I1007 12:02:09.105640   72038 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1007 12:02:09.105610   72038 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1007 12:02:09.105713   72038 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 12:02:09.105739   72038 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1007 12:02:09.105742   72038 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1007 12:02:09.105664   72038 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1007 12:02:09.105767   72038 ssh_runner.go:195] Run: which crictl
	I1007 12:02:09.105779   72038 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1007 12:02:09.184102   72038 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1007 12:02:09.285187   72038 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1007 12:02:09.285284   72038 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1007 12:02:09.285371   72038 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1007 12:02:09.285467   72038 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 12:02:09.285496   72038 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1007 12:02:09.285533   72038 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1007 12:02:09.285593   72038 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1007 12:02:09.390073   72038 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1007 12:02:09.465639   72038 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 12:02:09.465689   72038 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1007 12:02:09.465735   72038 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1007 12:02:09.465772   72038 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1007 12:02:09.465825   72038 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1007 12:02:09.465902   72038 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1007 12:02:09.468691   72038 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1007 12:02:09.611390   72038 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1007 12:02:09.611473   72038 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1007 12:02:09.611571   72038 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1007 12:02:09.619529   72038 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1007 12:02:09.619596   72038 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1007 12:02:09.619716   72038 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1007 12:02:09.668096   72038 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1007 12:02:09.886655   72038 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:02:10.044294   72038 cache_images.go:92] duration metric: took 1.341266632s to LoadCachedImages
	W1007 12:02:10.044371   72038 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I1007 12:02:10.044388   72038 kubeadm.go:934] updating node { 192.168.39.75 8443 v1.20.0 crio true true} ...
	I1007 12:02:10.044498   72038 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-610504 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.75
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-610504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:02:10.044584   72038 ssh_runner.go:195] Run: crio config
	I1007 12:02:10.095188   72038 cni.go:84] Creating CNI manager for ""
	I1007 12:02:10.095214   72038 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 12:02:10.095225   72038 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 12:02:10.095247   72038 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.75 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-610504 NodeName:old-k8s-version-610504 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.75"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.75 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1007 12:02:10.095410   72038 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.75
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-610504"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.75
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.75"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 12:02:10.095487   72038 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1007 12:02:10.106112   72038 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:02:10.106179   72038 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 12:02:10.117330   72038 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1007 12:02:10.138322   72038 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:02:10.156445   72038 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1007 12:02:10.176194   72038 ssh_runner.go:195] Run: grep 192.168.39.75	control-plane.minikube.internal$ /etc/hosts
	I1007 12:02:10.180507   72038 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.75	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:02:10.197309   72038 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:02:10.326011   72038 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:02:10.344843   72038 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/old-k8s-version-610504 for IP: 192.168.39.75
	I1007 12:02:10.344870   72038 certs.go:194] generating shared ca certs ...
	I1007 12:02:10.344892   72038 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:02:10.345087   72038 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 12:02:10.345152   72038 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 12:02:10.345165   72038 certs.go:256] generating profile certs ...
	I1007 12:02:10.345305   72038 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/old-k8s-version-610504/client.key
	I1007 12:02:10.345387   72038 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/old-k8s-version-610504/apiserver.key.509d276d
	I1007 12:02:10.345460   72038 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/old-k8s-version-610504/proxy-client.key
	I1007 12:02:10.345644   72038 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 12:02:10.345690   72038 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 12:02:10.345706   72038 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 12:02:10.345744   72038 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:02:10.345782   72038 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:02:10.345824   72038 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 12:02:10.345885   72038 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 12:02:10.346718   72038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:02:10.410787   72038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 12:02:10.449921   72038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:02:10.483753   72038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 12:02:10.521662   72038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/old-k8s-version-610504/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1007 12:02:10.567002   72038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/old-k8s-version-610504/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 12:02:10.620619   72038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/old-k8s-version-610504/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:02:10.666047   72038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/old-k8s-version-610504/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 12:02:10.694405   72038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:02:10.723062   72038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 12:02:10.750263   72038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 12:02:10.777276   72038 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 12:02:10.797045   72038 ssh_runner.go:195] Run: openssl version
	I1007 12:02:10.803351   72038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:02:10.815772   72038 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:02:10.821392   72038 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:02:10.821448   72038 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:02:10.827933   72038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:02:10.840702   72038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 12:02:10.852593   72038 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 12:02:10.857456   72038 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 12:02:10.857523   72038 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 12:02:10.863355   72038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
	I1007 12:02:10.875369   72038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 12:02:10.887947   72038 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 12:02:10.892857   72038 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 12:02:10.892915   72038 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 12:02:10.899145   72038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:02:10.911064   72038 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:02:10.916000   72038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 12:02:10.922614   72038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 12:02:10.928709   72038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 12:02:10.936925   72038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 12:02:10.943004   72038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 12:02:10.950290   72038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1007 12:02:10.956856   72038 kubeadm.go:392] StartCluster: {Name:old-k8s-version-610504 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-610504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.75 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:02:10.956960   72038 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 12:02:10.957033   72038 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:02:10.999579   72038 cri.go:89] found id: ""
	I1007 12:02:10.999641   72038 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 12:02:11.010294   72038 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 12:02:11.010312   72038 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 12:02:11.010360   72038 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 12:02:11.021081   72038 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 12:02:11.021988   72038 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-610504" does not appear in /home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 12:02:11.022639   72038 kubeconfig.go:62] /home/jenkins/minikube-integration/19761-3912/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-610504" cluster setting kubeconfig missing "old-k8s-version-610504" context setting]
	I1007 12:02:11.023627   72038 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/kubeconfig: {Name:mkc8a5ce1dbafe55e056433fff5c065506f83346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:02:11.025562   72038 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 12:02:11.035827   72038 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.75
	I1007 12:02:11.035859   72038 kubeadm.go:1160] stopping kube-system containers ...
	I1007 12:02:11.035871   72038 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1007 12:02:11.035915   72038 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:02:11.077088   72038 cri.go:89] found id: ""
	I1007 12:02:11.077166   72038 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1007 12:02:11.095348   72038 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 12:02:11.105595   72038 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 12:02:11.105620   72038 kubeadm.go:157] found existing configuration files:
	
	I1007 12:02:11.105672   72038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 12:02:11.115389   72038 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 12:02:11.115444   72038 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 12:02:11.125611   72038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 12:02:11.135252   72038 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 12:02:11.135369   72038 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 12:02:11.145561   72038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 12:02:11.155534   72038 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 12:02:11.155747   72038 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 12:02:11.168054   72038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 12:02:11.180354   72038 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 12:02:11.180423   72038 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 12:02:11.191342   72038 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 12:02:11.201994   72038 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 12:02:11.332861   72038 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 12:02:11.888839   72038 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1007 12:02:12.155655   72038 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 12:02:12.323617   72038 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1007 12:02:12.449212   72038 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:02:12.449328   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:12.949834   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:13.450180   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:09.342833   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:09.343471   71483 main.go:141] libmachine: (embed-certs-475689) DBG | unable to find current IP address of domain embed-certs-475689 in network mk-embed-certs-475689
	I1007 12:02:09.343496   71483 main.go:141] libmachine: (embed-certs-475689) DBG | I1007 12:02:09.343423   73068 retry.go:31] will retry after 2.237454734s: waiting for machine to come up
	I1007 12:02:11.582180   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:11.582735   71483 main.go:141] libmachine: (embed-certs-475689) DBG | unable to find current IP address of domain embed-certs-475689 in network mk-embed-certs-475689
	I1007 12:02:11.582768   71483 main.go:141] libmachine: (embed-certs-475689) DBG | I1007 12:02:11.582684   73068 retry.go:31] will retry after 1.889982763s: waiting for machine to come up
	I1007 12:02:13.474389   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:13.475061   71483 main.go:141] libmachine: (embed-certs-475689) DBG | unable to find current IP address of domain embed-certs-475689 in network mk-embed-certs-475689
	I1007 12:02:13.475094   71483 main.go:141] libmachine: (embed-certs-475689) DBG | I1007 12:02:13.474975   73068 retry.go:31] will retry after 3.563083303s: waiting for machine to come up
	I1007 12:02:10.254474   71606 pod_ready.go:103] pod "coredns-7c65d6cfc9-n6kzh" in "kube-system" namespace has status "Ready":"False"
	I1007 12:02:12.483678   71606 pod_ready.go:103] pod "coredns-7c65d6cfc9-n6kzh" in "kube-system" namespace has status "Ready":"False"
	I1007 12:02:13.949720   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:14.450121   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:14.949394   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:15.450135   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:15.950223   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:16.449766   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:16.950088   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:17.449720   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:17.949351   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:18.450309   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:17.039359   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:17.039837   71483 main.go:141] libmachine: (embed-certs-475689) DBG | unable to find current IP address of domain embed-certs-475689 in network mk-embed-certs-475689
	I1007 12:02:17.039861   71483 main.go:141] libmachine: (embed-certs-475689) DBG | I1007 12:02:17.039768   73068 retry.go:31] will retry after 3.396419578s: waiting for machine to come up
	I1007 12:02:14.983359   71606 pod_ready.go:103] pod "coredns-7c65d6cfc9-n6kzh" in "kube-system" namespace has status "Ready":"False"
	I1007 12:02:16.985981   71606 pod_ready.go:103] pod "coredns-7c65d6cfc9-n6kzh" in "kube-system" namespace has status "Ready":"False"
	I1007 12:02:18.483216   71606 pod_ready.go:93] pod "coredns-7c65d6cfc9-n6kzh" in "kube-system" namespace has status "Ready":"True"
	I1007 12:02:18.483237   71606 pod_ready.go:82] duration metric: took 11.006551473s for pod "coredns-7c65d6cfc9-n6kzh" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:18.483247   71606 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-236151" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:18.488830   71606 pod_ready.go:93] pod "etcd-no-preload-236151" in "kube-system" namespace has status "Ready":"True"
	I1007 12:02:18.488849   71606 pod_ready.go:82] duration metric: took 5.596846ms for pod "etcd-no-preload-236151" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:18.488858   71606 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-236151" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:18.494245   71606 pod_ready.go:93] pod "kube-apiserver-no-preload-236151" in "kube-system" namespace has status "Ready":"True"
	I1007 12:02:18.494268   71606 pod_ready.go:82] duration metric: took 5.402453ms for pod "kube-apiserver-no-preload-236151" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:18.494281   71606 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-236151" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:18.501966   71606 pod_ready.go:93] pod "kube-controller-manager-no-preload-236151" in "kube-system" namespace has status "Ready":"True"
	I1007 12:02:18.501987   71606 pod_ready.go:82] duration metric: took 7.696721ms for pod "kube-controller-manager-no-preload-236151" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:18.501995   71606 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tvpkl" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:18.506175   71606 pod_ready.go:93] pod "kube-proxy-tvpkl" in "kube-system" namespace has status "Ready":"True"
	I1007 12:02:18.506200   71606 pod_ready.go:82] duration metric: took 4.197198ms for pod "kube-proxy-tvpkl" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:18.506212   71606 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-236151" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:18.881595   71606 pod_ready.go:93] pod "kube-scheduler-no-preload-236151" in "kube-system" namespace has status "Ready":"True"
	I1007 12:02:18.881622   71606 pod_ready.go:82] duration metric: took 375.402406ms for pod "kube-scheduler-no-preload-236151" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:18.881637   71606 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-kg48w" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:20.437331   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:20.437800   71483 main.go:141] libmachine: (embed-certs-475689) Found IP for machine: 192.168.50.37
	I1007 12:02:20.437827   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has current primary IP address 192.168.50.37 and MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:20.437849   71483 main.go:141] libmachine: (embed-certs-475689) Reserving static IP address...
	I1007 12:02:20.438327   71483 main.go:141] libmachine: (embed-certs-475689) DBG | found host DHCP lease matching {name: "embed-certs-475689", mac: "52:54:00:7e:d2:45", ip: "192.168.50.37"} in network mk-embed-certs-475689: {Iface:virbr4 ExpiryTime:2024-10-07 12:53:22 +0000 UTC Type:0 Mac:52:54:00:7e:d2:45 Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:embed-certs-475689 Clientid:01:52:54:00:7e:d2:45}
	I1007 12:02:20.438358   71483 main.go:141] libmachine: (embed-certs-475689) Reserved static IP address: 192.168.50.37
	I1007 12:02:20.438370   71483 main.go:141] libmachine: (embed-certs-475689) DBG | skip adding static IP to network mk-embed-certs-475689 - found existing host DHCP lease matching {name: "embed-certs-475689", mac: "52:54:00:7e:d2:45", ip: "192.168.50.37"}
	I1007 12:02:20.438378   71483 main.go:141] libmachine: (embed-certs-475689) Waiting for SSH to be available...
	I1007 12:02:20.438384   71483 main.go:141] libmachine: (embed-certs-475689) DBG | Getting to WaitForSSH function...
	I1007 12:02:20.440601   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:20.440885   71483 main.go:141] libmachine: (embed-certs-475689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d2:45", ip: ""} in network mk-embed-certs-475689: {Iface:virbr4 ExpiryTime:2024-10-07 12:53:22 +0000 UTC Type:0 Mac:52:54:00:7e:d2:45 Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:embed-certs-475689 Clientid:01:52:54:00:7e:d2:45}
	I1007 12:02:20.440917   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined IP address 192.168.50.37 and MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:20.441044   71483 main.go:141] libmachine: (embed-certs-475689) DBG | Using SSH client type: external
	I1007 12:02:20.441064   71483 main.go:141] libmachine: (embed-certs-475689) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/embed-certs-475689/id_rsa (-rw-------)
	I1007 12:02:20.441087   71483 main.go:141] libmachine: (embed-certs-475689) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.37 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/embed-certs-475689/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:02:20.441104   71483 main.go:141] libmachine: (embed-certs-475689) DBG | About to run SSH command:
	I1007 12:02:20.441111   71483 main.go:141] libmachine: (embed-certs-475689) DBG | exit 0
	I1007 12:02:20.572015   71483 main.go:141] libmachine: (embed-certs-475689) DBG | SSH cmd err, output: <nil>: 
	I1007 12:02:20.572334   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetConfigRaw
	I1007 12:02:20.572930   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetIP
	I1007 12:02:20.575323   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:20.575659   71483 main.go:141] libmachine: (embed-certs-475689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d2:45", ip: ""} in network mk-embed-certs-475689: {Iface:virbr4 ExpiryTime:2024-10-07 12:53:22 +0000 UTC Type:0 Mac:52:54:00:7e:d2:45 Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:embed-certs-475689 Clientid:01:52:54:00:7e:d2:45}
	I1007 12:02:20.575680   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined IP address 192.168.50.37 and MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:20.575973   71483 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/embed-certs-475689/config.json ...
	I1007 12:02:20.576193   71483 machine.go:93] provisionDockerMachine start ...
	I1007 12:02:20.576210   71483 main.go:141] libmachine: (embed-certs-475689) Calling .DriverName
	I1007 12:02:20.576375   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHHostname
	I1007 12:02:20.578543   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:20.578832   71483 main.go:141] libmachine: (embed-certs-475689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d2:45", ip: ""} in network mk-embed-certs-475689: {Iface:virbr4 ExpiryTime:2024-10-07 12:53:22 +0000 UTC Type:0 Mac:52:54:00:7e:d2:45 Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:embed-certs-475689 Clientid:01:52:54:00:7e:d2:45}
	I1007 12:02:20.578866   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined IP address 192.168.50.37 and MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:20.578967   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHPort
	I1007 12:02:20.579123   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHKeyPath
	I1007 12:02:20.579289   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHKeyPath
	I1007 12:02:20.579414   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHUsername
	I1007 12:02:20.579566   71483 main.go:141] libmachine: Using SSH client type: native
	I1007 12:02:20.579730   71483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.50.37 22 <nil> <nil>}
	I1007 12:02:20.579740   71483 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 12:02:20.700897   71483 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1007 12:02:20.700928   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetMachineName
	I1007 12:02:20.701150   71483 buildroot.go:166] provisioning hostname "embed-certs-475689"
	I1007 12:02:20.701181   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetMachineName
	I1007 12:02:20.701344   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHHostname
	I1007 12:02:20.703852   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:20.704194   71483 main.go:141] libmachine: (embed-certs-475689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d2:45", ip: ""} in network mk-embed-certs-475689: {Iface:virbr4 ExpiryTime:2024-10-07 12:53:22 +0000 UTC Type:0 Mac:52:54:00:7e:d2:45 Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:embed-certs-475689 Clientid:01:52:54:00:7e:d2:45}
	I1007 12:02:20.704222   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined IP address 192.168.50.37 and MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:20.704361   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHPort
	I1007 12:02:20.704540   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHKeyPath
	I1007 12:02:20.704655   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHKeyPath
	I1007 12:02:20.704746   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHUsername
	I1007 12:02:20.704889   71483 main.go:141] libmachine: Using SSH client type: native
	I1007 12:02:20.705053   71483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.50.37 22 <nil> <nil>}
	I1007 12:02:20.705066   71483 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-475689 && echo "embed-certs-475689" | sudo tee /etc/hostname
	I1007 12:02:20.835902   71483 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-475689
	
	I1007 12:02:20.835929   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHHostname
	I1007 12:02:20.838724   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:20.839093   71483 main.go:141] libmachine: (embed-certs-475689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d2:45", ip: ""} in network mk-embed-certs-475689: {Iface:virbr4 ExpiryTime:2024-10-07 12:53:22 +0000 UTC Type:0 Mac:52:54:00:7e:d2:45 Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:embed-certs-475689 Clientid:01:52:54:00:7e:d2:45}
	I1007 12:02:20.839126   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined IP address 192.168.50.37 and MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:20.839352   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHPort
	I1007 12:02:20.839577   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHKeyPath
	I1007 12:02:20.839748   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHKeyPath
	I1007 12:02:20.839909   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHUsername
	I1007 12:02:20.840075   71483 main.go:141] libmachine: Using SSH client type: native
	I1007 12:02:20.840240   71483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.50.37 22 <nil> <nil>}
	I1007 12:02:20.840255   71483 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-475689' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-475689/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-475689' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:02:20.966075   71483 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:02:20.966112   71483 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 12:02:20.966131   71483 buildroot.go:174] setting up certificates
	I1007 12:02:20.966143   71483 provision.go:84] configureAuth start
	I1007 12:02:20.966156   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetMachineName
	I1007 12:02:20.966450   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetIP
	I1007 12:02:20.969040   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:20.969413   71483 main.go:141] libmachine: (embed-certs-475689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d2:45", ip: ""} in network mk-embed-certs-475689: {Iface:virbr4 ExpiryTime:2024-10-07 12:53:22 +0000 UTC Type:0 Mac:52:54:00:7e:d2:45 Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:embed-certs-475689 Clientid:01:52:54:00:7e:d2:45}
	I1007 12:02:20.969435   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined IP address 192.168.50.37 and MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:20.969599   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHHostname
	I1007 12:02:20.971786   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:20.972165   71483 main.go:141] libmachine: (embed-certs-475689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d2:45", ip: ""} in network mk-embed-certs-475689: {Iface:virbr4 ExpiryTime:2024-10-07 12:53:22 +0000 UTC Type:0 Mac:52:54:00:7e:d2:45 Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:embed-certs-475689 Clientid:01:52:54:00:7e:d2:45}
	I1007 12:02:20.972190   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined IP address 192.168.50.37 and MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:20.972375   71483 provision.go:143] copyHostCerts
	I1007 12:02:20.972436   71483 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 12:02:20.972457   71483 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 12:02:20.972536   71483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 12:02:20.972661   71483 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 12:02:20.972672   71483 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 12:02:20.972702   71483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 12:02:20.972791   71483 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 12:02:20.972802   71483 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 12:02:20.972829   71483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 12:02:20.972911   71483 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.embed-certs-475689 san=[127.0.0.1 192.168.50.37 embed-certs-475689 localhost minikube]
	I1007 12:02:21.274474   71483 provision.go:177] copyRemoteCerts
	I1007 12:02:21.274526   71483 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:02:21.274553   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHHostname
	I1007 12:02:21.277342   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:21.277611   71483 main.go:141] libmachine: (embed-certs-475689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d2:45", ip: ""} in network mk-embed-certs-475689: {Iface:virbr4 ExpiryTime:2024-10-07 12:53:22 +0000 UTC Type:0 Mac:52:54:00:7e:d2:45 Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:embed-certs-475689 Clientid:01:52:54:00:7e:d2:45}
	I1007 12:02:21.277639   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined IP address 192.168.50.37 and MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:21.277779   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHPort
	I1007 12:02:21.277963   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHKeyPath
	I1007 12:02:21.278081   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHUsername
	I1007 12:02:21.278182   71483 sshutil.go:53] new ssh client: &{IP:192.168.50.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/embed-certs-475689/id_rsa Username:docker}
	I1007 12:02:21.367068   71483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1007 12:02:21.392479   71483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:02:21.417489   71483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:02:21.441206   71483 provision.go:87] duration metric: took 475.049984ms to configureAuth
	I1007 12:02:21.441237   71483 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:02:21.441424   71483 config.go:182] Loaded profile config "embed-certs-475689": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:02:21.441492   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHHostname
	I1007 12:02:21.444481   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:21.444800   71483 main.go:141] libmachine: (embed-certs-475689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d2:45", ip: ""} in network mk-embed-certs-475689: {Iface:virbr4 ExpiryTime:2024-10-07 12:53:22 +0000 UTC Type:0 Mac:52:54:00:7e:d2:45 Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:embed-certs-475689 Clientid:01:52:54:00:7e:d2:45}
	I1007 12:02:21.444831   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined IP address 192.168.50.37 and MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:21.444969   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHPort
	I1007 12:02:21.445178   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHKeyPath
	I1007 12:02:21.445363   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHKeyPath
	I1007 12:02:21.445512   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHUsername
	I1007 12:02:21.445655   71483 main.go:141] libmachine: Using SSH client type: native
	I1007 12:02:21.445904   71483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.50.37 22 <nil> <nil>}
	I1007 12:02:21.445931   71483 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:02:21.694303   71483 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:02:21.694331   71483 machine.go:96] duration metric: took 1.11812492s to provisionDockerMachine
	I1007 12:02:21.694344   71483 start.go:293] postStartSetup for "embed-certs-475689" (driver="kvm2")
	I1007 12:02:21.694356   71483 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:02:21.694372   71483 main.go:141] libmachine: (embed-certs-475689) Calling .DriverName
	I1007 12:02:21.694902   71483 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:02:21.694928   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHHostname
	I1007 12:02:21.697829   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:21.698183   71483 main.go:141] libmachine: (embed-certs-475689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d2:45", ip: ""} in network mk-embed-certs-475689: {Iface:virbr4 ExpiryTime:2024-10-07 12:53:22 +0000 UTC Type:0 Mac:52:54:00:7e:d2:45 Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:embed-certs-475689 Clientid:01:52:54:00:7e:d2:45}
	I1007 12:02:21.698212   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined IP address 192.168.50.37 and MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:21.698395   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHPort
	I1007 12:02:21.698572   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHKeyPath
	I1007 12:02:21.698715   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHUsername
	I1007 12:02:21.698814   71483 sshutil.go:53] new ssh client: &{IP:192.168.50.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/embed-certs-475689/id_rsa Username:docker}
	I1007 12:02:21.788556   71483 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:02:21.793285   71483 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:02:21.793309   71483 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 12:02:21.793390   71483 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 12:02:21.793476   71483 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 12:02:21.793580   71483 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:02:21.803926   71483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 12:02:21.829222   71483 start.go:296] duration metric: took 134.865771ms for postStartSetup
	I1007 12:02:21.829272   71483 fix.go:56] duration metric: took 20.106759818s for fixHost
	I1007 12:02:21.829296   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHHostname
	I1007 12:02:21.831699   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:21.832114   71483 main.go:141] libmachine: (embed-certs-475689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d2:45", ip: ""} in network mk-embed-certs-475689: {Iface:virbr4 ExpiryTime:2024-10-07 12:53:22 +0000 UTC Type:0 Mac:52:54:00:7e:d2:45 Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:embed-certs-475689 Clientid:01:52:54:00:7e:d2:45}
	I1007 12:02:21.832144   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined IP address 192.168.50.37 and MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:21.832339   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHPort
	I1007 12:02:21.832511   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHKeyPath
	I1007 12:02:21.832658   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHKeyPath
	I1007 12:02:21.832796   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHUsername
	I1007 12:02:21.832928   71483 main.go:141] libmachine: Using SSH client type: native
	I1007 12:02:21.833073   71483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.50.37 22 <nil> <nil>}
	I1007 12:02:21.833085   71483 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:02:21.948826   71483 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728302541.904058680
	
	I1007 12:02:21.948846   71483 fix.go:216] guest clock: 1728302541.904058680
	I1007 12:02:21.948853   71483 fix.go:229] Guest: 2024-10-07 12:02:21.90405868 +0000 UTC Remote: 2024-10-07 12:02:21.829277254 +0000 UTC m=+337.969440012 (delta=74.781426ms)
	I1007 12:02:21.948872   71483 fix.go:200] guest clock delta is within tolerance: 74.781426ms
	I1007 12:02:21.948880   71483 start.go:83] releasing machines lock for "embed-certs-475689", held for 20.226404497s
	I1007 12:02:21.948898   71483 main.go:141] libmachine: (embed-certs-475689) Calling .DriverName
	I1007 12:02:21.949175   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetIP
	I1007 12:02:21.952109   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:21.952386   71483 main.go:141] libmachine: (embed-certs-475689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d2:45", ip: ""} in network mk-embed-certs-475689: {Iface:virbr4 ExpiryTime:2024-10-07 12:53:22 +0000 UTC Type:0 Mac:52:54:00:7e:d2:45 Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:embed-certs-475689 Clientid:01:52:54:00:7e:d2:45}
	I1007 12:02:21.952407   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined IP address 192.168.50.37 and MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:21.952584   71483 main.go:141] libmachine: (embed-certs-475689) Calling .DriverName
	I1007 12:02:21.953135   71483 main.go:141] libmachine: (embed-certs-475689) Calling .DriverName
	I1007 12:02:21.953304   71483 main.go:141] libmachine: (embed-certs-475689) Calling .DriverName
	I1007 12:02:21.953387   71483 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:02:21.953428   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHHostname
	I1007 12:02:21.953521   71483 ssh_runner.go:195] Run: cat /version.json
	I1007 12:02:21.953546   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHHostname
	I1007 12:02:21.955817   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:21.956155   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:21.956204   71483 main.go:141] libmachine: (embed-certs-475689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d2:45", ip: ""} in network mk-embed-certs-475689: {Iface:virbr4 ExpiryTime:2024-10-07 12:53:22 +0000 UTC Type:0 Mac:52:54:00:7e:d2:45 Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:embed-certs-475689 Clientid:01:52:54:00:7e:d2:45}
	I1007 12:02:21.956227   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined IP address 192.168.50.37 and MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:21.956410   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHPort
	I1007 12:02:21.956583   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHKeyPath
	I1007 12:02:21.956622   71483 main.go:141] libmachine: (embed-certs-475689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d2:45", ip: ""} in network mk-embed-certs-475689: {Iface:virbr4 ExpiryTime:2024-10-07 12:53:22 +0000 UTC Type:0 Mac:52:54:00:7e:d2:45 Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:embed-certs-475689 Clientid:01:52:54:00:7e:d2:45}
	I1007 12:02:21.956651   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined IP address 192.168.50.37 and MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:21.956750   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHUsername
	I1007 12:02:21.956805   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHPort
	I1007 12:02:21.956886   71483 sshutil.go:53] new ssh client: &{IP:192.168.50.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/embed-certs-475689/id_rsa Username:docker}
	I1007 12:02:21.956990   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHKeyPath
	I1007 12:02:21.957137   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHUsername
	I1007 12:02:21.957274   71483 sshutil.go:53] new ssh client: &{IP:192.168.50.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/embed-certs-475689/id_rsa Username:docker}
	I1007 12:02:22.063367   71483 ssh_runner.go:195] Run: systemctl --version
	I1007 12:02:22.069800   71483 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:02:22.218291   71483 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:02:22.224364   71483 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:02:22.224454   71483 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:02:22.240812   71483 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:02:22.240861   71483 start.go:495] detecting cgroup driver to use...
	I1007 12:02:22.240933   71483 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:02:22.256753   71483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:02:22.270673   71483 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:02:22.270737   71483 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:02:22.284209   71483 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:02:22.297656   71483 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:02:22.412088   71483 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:02:22.594303   71483 docker.go:233] disabling docker service ...
	I1007 12:02:22.594362   71483 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:02:22.608824   71483 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:02:22.624099   71483 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:02:22.744560   71483 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:02:22.859973   71483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:02:22.873871   71483 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:02:22.893531   71483 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:02:22.893634   71483 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:02:22.903904   71483 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:02:22.903963   71483 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:02:22.914041   71483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:02:22.924319   71483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:02:22.934716   71483 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:02:22.945154   71483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:02:22.956602   71483 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:02:22.977324   71483 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
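	The sed edits above pin the pause image, switch cri-o to the cgroupfs driver, move conmon into the pod cgroup, and open unprivileged low ports. After they run, the relevant lines of /etc/crio/crio.conf.d/02-crio.conf should look roughly like the excerpt below (reconstructed from the commands above, not a verbatim copy of the file; surrounding keys and section headers are omitted).

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
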
	I1007 12:02:22.988568   71483 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:02:22.998299   71483 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:02:22.998361   71483 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:02:23.014074   71483 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
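	The sysctl probe fails here simply because the br_netfilter module is not loaded yet, so the flow falls back to modprobe and then enables IPv4 forwarding directly through /proc. A small local sketch of that fallback (run as root on the guest; this is not minikube's own helper code) might look like:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"

    	// If the bridge netfilter sysctl node is missing, br_netfilter has not
    	// been loaded; load it before relying on bridge-nf-call-iptables.
    	if _, err := os.Stat(key); os.IsNotExist(err) {
    		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			fmt.Fprintf(os.Stderr, "modprobe br_netfilter failed: %v: %s\n", err, out)
    			os.Exit(1)
    		}
    	}

    	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
    		fmt.Fprintf(os.Stderr, "enabling ip_forward failed: %v\n", err)
    		os.Exit(1)
    	}
    	fmt.Println("br_netfilter loaded and ip_forward enabled")
    }
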
	I1007 12:02:23.023944   71483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:02:23.149828   71483 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:02:23.238819   71483 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:02:23.238888   71483 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:02:23.244024   71483 start.go:563] Will wait 60s for crictl version
	I1007 12:02:23.244099   71483 ssh_runner.go:195] Run: which crictl
	I1007 12:02:23.247702   71483 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:02:23.288871   71483 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:02:23.288954   71483 ssh_runner.go:195] Run: crio --version
	I1007 12:02:23.317450   71483 ssh_runner.go:195] Run: crio --version
	I1007 12:02:23.349017   71483 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:02:18.950072   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:19.449922   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:19.949941   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:20.449408   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:20.949492   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:21.450050   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:21.949390   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:22.450072   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:22.950372   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:23.449563   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:23.350484   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetIP
	I1007 12:02:23.353097   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:23.353442   71483 main.go:141] libmachine: (embed-certs-475689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d2:45", ip: ""} in network mk-embed-certs-475689: {Iface:virbr4 ExpiryTime:2024-10-07 12:53:22 +0000 UTC Type:0 Mac:52:54:00:7e:d2:45 Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:embed-certs-475689 Clientid:01:52:54:00:7e:d2:45}
	I1007 12:02:23.353472   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined IP address 192.168.50.37 and MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:23.353657   71483 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1007 12:02:23.357814   71483 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:02:23.371204   71483 kubeadm.go:883] updating cluster {Name:embed-certs-475689 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.31.1 ClusterName:embed-certs-475689 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 12:02:23.371330   71483 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:02:23.371392   71483 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:02:23.408852   71483 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 12:02:23.408906   71483 ssh_runner.go:195] Run: which lz4
	I1007 12:02:23.413142   71483 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 12:02:23.417308   71483 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 12:02:23.417342   71483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 12:02:20.888893   71606 pod_ready.go:103] pod "metrics-server-6867b74b74-kg48w" in "kube-system" namespace has status "Ready":"False"
	I1007 12:02:23.388396   71606 pod_ready.go:103] pod "metrics-server-6867b74b74-kg48w" in "kube-system" namespace has status "Ready":"False"
	I1007 12:02:23.949877   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:24.450038   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:24.949563   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:25.450102   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:25.950291   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:26.449404   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:26.949511   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:27.449629   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:27.949690   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:28.450262   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:24.826990   71483 crio.go:462] duration metric: took 1.413873464s to copy over tarball
	I1007 12:02:24.827120   71483 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 12:02:26.936069   71483 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.108908272s)
	I1007 12:02:26.936107   71483 crio.go:469] duration metric: took 2.109079866s to extract the tarball
	I1007 12:02:26.936117   71483 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 12:02:26.979765   71483 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:02:27.027765   71483 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 12:02:27.027787   71483 cache_images.go:84] Images are preloaded, skipping loading
	I1007 12:02:27.027797   71483 kubeadm.go:934] updating node { 192.168.50.37 8443 v1.31.1 crio true true} ...
	I1007 12:02:27.027891   71483 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-475689 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.37
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-475689 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:02:27.027972   71483 ssh_runner.go:195] Run: crio config
	I1007 12:02:27.074674   71483 cni.go:84] Creating CNI manager for ""
	I1007 12:02:27.074698   71483 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 12:02:27.074709   71483 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 12:02:27.074745   71483 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.37 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-475689 NodeName:embed-certs-475689 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.37"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.37 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 12:02:27.074928   71483 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.37
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-475689"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.37
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.37"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 12:02:27.074999   71483 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:02:27.085235   71483 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:02:27.085305   71483 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 12:02:27.095818   71483 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1007 12:02:27.113777   71483 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:02:27.131634   71483 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1007 12:02:27.150324   71483 ssh_runner.go:195] Run: grep 192.168.50.37	control-plane.minikube.internal$ /etc/hosts
	I1007 12:02:27.154403   71483 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.37	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
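	The two /etc/hosts edits (host.minikube.internal earlier, control-plane.minikube.internal here) follow the same pattern: drop any existing line for the name, append a fresh entry, and copy the result back into place. A minimal idempotent sketch of that update in plain Go (run as root; not minikube's helper):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHost rewrites an /etc/hosts-style file so it contains exactly one
    // "<ip>\t<name>" entry, removing any stale lines for the same name first.
    func upsertHost(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop the old entry for this name
    		}
    		if line != "" {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	if err := upsertHost("/etc/hosts", "192.168.50.37", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
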
	I1007 12:02:27.167713   71483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:02:27.309264   71483 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:02:27.328285   71483 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/embed-certs-475689 for IP: 192.168.50.37
	I1007 12:02:27.328304   71483 certs.go:194] generating shared ca certs ...
	I1007 12:02:27.328318   71483 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:02:27.328462   71483 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 12:02:27.328537   71483 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 12:02:27.328554   71483 certs.go:256] generating profile certs ...
	I1007 12:02:27.328655   71483 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/embed-certs-475689/client.key
	I1007 12:02:27.328731   71483 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/embed-certs-475689/apiserver.key.1e91e694
	I1007 12:02:27.328789   71483 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/embed-certs-475689/proxy-client.key
	I1007 12:02:27.328948   71483 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 12:02:27.329000   71483 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 12:02:27.329014   71483 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 12:02:27.329055   71483 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:02:27.329091   71483 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:02:27.329127   71483 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 12:02:27.329194   71483 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 12:02:27.329896   71483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:02:27.368833   71483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 12:02:27.404365   71483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:02:27.440754   71483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 12:02:27.477560   71483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/embed-certs-475689/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1007 12:02:27.519335   71483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/embed-certs-475689/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 12:02:27.545194   71483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/embed-certs-475689/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:02:27.571249   71483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/embed-certs-475689/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:02:27.596143   71483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 12:02:27.621322   71483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:02:27.646231   71483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 12:02:27.670863   71483 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 12:02:27.687914   71483 ssh_runner.go:195] Run: openssl version
	I1007 12:02:27.693885   71483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 12:02:27.704970   71483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 12:02:27.709700   71483 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 12:02:27.709753   71483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 12:02:27.715701   71483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:02:27.726415   71483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:02:27.737264   71483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:02:27.741694   71483 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:02:27.741747   71483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:02:27.747752   71483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:02:27.759116   71483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 12:02:27.770295   71483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 12:02:27.775151   71483 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 12:02:27.775216   71483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 12:02:27.781253   71483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
	I1007 12:02:27.792812   71483 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:02:27.797820   71483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 12:02:27.804083   71483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 12:02:27.810338   71483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 12:02:27.816719   71483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 12:02:27.822801   71483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 12:02:27.828823   71483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
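	Each `openssl x509 -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now. The same check expressed with Go's crypto/x509 (a sketch; the file path is just one of the certs listed above):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires within the given window (the analogue of `openssl x509 -checkend`).
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
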
	I1007 12:02:27.834639   71483 kubeadm.go:392] StartCluster: {Name:embed-certs-475689 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
1.1 ClusterName:embed-certs-475689 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:02:27.834746   71483 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 12:02:27.834797   71483 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:02:27.872525   71483 cri.go:89] found id: ""
	I1007 12:02:27.872604   71483 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 12:02:27.883023   71483 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 12:02:27.883045   71483 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 12:02:27.883083   71483 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 12:02:27.893846   71483 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 12:02:27.895212   71483 kubeconfig.go:125] found "embed-certs-475689" server: "https://192.168.50.37:8443"
	I1007 12:02:27.897131   71483 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 12:02:27.907288   71483 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.37
	I1007 12:02:27.907313   71483 kubeadm.go:1160] stopping kube-system containers ...
	I1007 12:02:27.907325   71483 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1007 12:02:27.907392   71483 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:02:27.950963   71483 cri.go:89] found id: ""
	I1007 12:02:27.951022   71483 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1007 12:02:27.967704   71483 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 12:02:27.977978   71483 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 12:02:27.978001   71483 kubeadm.go:157] found existing configuration files:
	
	I1007 12:02:27.978049   71483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 12:02:27.987944   71483 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 12:02:27.988031   71483 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 12:02:27.998030   71483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 12:02:28.007142   71483 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 12:02:28.007206   71483 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 12:02:28.017047   71483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 12:02:28.026373   71483 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 12:02:28.026437   71483 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 12:02:28.036832   71483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 12:02:28.046149   71483 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 12:02:28.046227   71483 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 12:02:28.056425   71483 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 12:02:28.066871   71483 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 12:02:28.186019   71483 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 12:02:25.390564   71606 pod_ready.go:103] pod "metrics-server-6867b74b74-kg48w" in "kube-system" namespace has status "Ready":"False"
	I1007 12:02:27.894304   71606 pod_ready.go:103] pod "metrics-server-6867b74b74-kg48w" in "kube-system" namespace has status "Ready":"False"
	I1007 12:02:28.950184   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:29.450054   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:29.949410   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:30.449493   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:30.949880   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:31.449784   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:31.949713   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:32.450201   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:32.950251   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:33.449567   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:29.124335   71483 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1007 12:02:29.330325   71483 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 12:02:29.399960   71483 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1007 12:02:29.505460   71483 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:02:29.505561   71483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:30.006115   71483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:30.506418   71483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:31.006305   71483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:31.505704   71483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:31.522206   71483 api_server.go:72] duration metric: took 2.016756304s to wait for apiserver process to appear ...
	I1007 12:02:31.522235   71483 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:02:31.522252   71483 api_server.go:253] Checking apiserver healthz at https://192.168.50.37:8443/healthz ...
	I1007 12:02:30.388996   71606 pod_ready.go:103] pod "metrics-server-6867b74b74-kg48w" in "kube-system" namespace has status "Ready":"False"
	I1007 12:02:32.889711   71606 pod_ready.go:103] pod "metrics-server-6867b74b74-kg48w" in "kube-system" namespace has status "Ready":"False"
	I1007 12:02:34.034275   71483 api_server.go:279] https://192.168.50.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1007 12:02:34.034307   71483 api_server.go:103] status: https://192.168.50.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1007 12:02:34.034319   71483 api_server.go:253] Checking apiserver healthz at https://192.168.50.37:8443/healthz ...
	I1007 12:02:34.070241   71483 api_server.go:279] https://192.168.50.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1007 12:02:34.070267   71483 api_server.go:103] status: https://192.168.50.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1007 12:02:34.522291   71483 api_server.go:253] Checking apiserver healthz at https://192.168.50.37:8443/healthz ...
	I1007 12:02:34.528367   71483 api_server.go:279] https://192.168.50.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:02:34.528404   71483 api_server.go:103] status: https://192.168.50.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:02:35.023012   71483 api_server.go:253] Checking apiserver healthz at https://192.168.50.37:8443/healthz ...
	I1007 12:02:35.029400   71483 api_server.go:279] https://192.168.50.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:02:35.029435   71483 api_server.go:103] status: https://192.168.50.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:02:35.523047   71483 api_server.go:253] Checking apiserver healthz at https://192.168.50.37:8443/healthz ...
	I1007 12:02:35.527640   71483 api_server.go:279] https://192.168.50.37:8443/healthz returned 200:
	ok
	I1007 12:02:35.534441   71483 api_server.go:141] control plane version: v1.31.1
	I1007 12:02:35.534470   71483 api_server.go:131] duration metric: took 4.012229438s to wait for apiserver health ...
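	The healthz loop above tolerates 403 (anonymous access denied while RBAC bootstrap roles are still being created) and 500 (post-start hooks not yet finished) and only stops once /healthz returns 200 "ok". A stripped-down version of that polling loop is sketched below; TLS verification is skipped purely to keep the example short, whereas the real client trusts the cluster CA.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// InsecureSkipVerify only for brevity in this sketch.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.50.37:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("apiserver healthy: %s\n", body)
    				return
    			}
    			// 403/500 mean the apiserver is up but still bootstrapping.
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("apiserver did not become healthy in time")
    }
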
	I1007 12:02:35.534480   71483 cni.go:84] Creating CNI manager for ""
	I1007 12:02:35.534486   71483 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 12:02:35.536397   71483 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 12:02:35.537683   71483 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 12:02:35.551065   71483 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1007 12:02:35.575276   71483 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:02:35.587946   71483 system_pods.go:59] 8 kube-system pods found
	I1007 12:02:35.588007   71483 system_pods.go:61] "coredns-7c65d6cfc9-vtxt8" [2c229648-005a-4ac9-8e85-8f84c70f1666] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 12:02:35.588023   71483 system_pods.go:61] "etcd-embed-certs-475689" [6d194c6f-6c2d-4b7c-8c09-ccc8fe593eb6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1007 12:02:35.588033   71483 system_pods.go:61] "kube-apiserver-embed-certs-475689" [153bd366-98a4-4120-829b-079aa36d1749] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1007 12:02:35.588041   71483 system_pods.go:61] "kube-controller-manager-embed-certs-475689" [1e5a1f54-0014-4bb1-8f24-25e3a1f43c21] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1007 12:02:35.588053   71483 system_pods.go:61] "kube-proxy-6l84n" [db77c258-9a9e-425e-8caa-8e956d9d9d06] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1007 12:02:35.588076   71483 system_pods.go:61] "kube-scheduler-embed-certs-475689" [79f339b5-3e85-4882-9f01-a9967918b6d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1007 12:02:35.588089   71483 system_pods.go:61] "metrics-server-6867b74b74-cld8v" [572f63d3-0ae3-4ba9-bbdd-d0397def13fa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 12:02:35.588107   71483 system_pods.go:61] "storage-provisioner" [0e11ad6e-3e21-4fa1-81c0-0c5a0870668b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1007 12:02:35.588116   71483 system_pods.go:74] duration metric: took 12.82025ms to wait for pod list to return data ...
	I1007 12:02:35.588126   71483 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:02:35.592274   71483 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:02:35.592303   71483 node_conditions.go:123] node cpu capacity is 2
	I1007 12:02:35.592315   71483 node_conditions.go:105] duration metric: took 4.184585ms to run NodePressure ...
	I1007 12:02:35.592334   71483 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 12:02:35.884088   71483 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1007 12:02:35.889039   71483 kubeadm.go:739] kubelet initialised
	I1007 12:02:35.889078   71483 kubeadm.go:740] duration metric: took 4.949962ms waiting for restarted kubelet to initialise ...
	I1007 12:02:35.889089   71483 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:02:35.895138   71483 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-vtxt8" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:35.908256   71483 pod_ready.go:98] node "embed-certs-475689" hosting pod "coredns-7c65d6cfc9-vtxt8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-475689" has status "Ready":"False"
	I1007 12:02:35.908291   71483 pod_ready.go:82] duration metric: took 13.099156ms for pod "coredns-7c65d6cfc9-vtxt8" in "kube-system" namespace to be "Ready" ...
	E1007 12:02:35.908304   71483 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-475689" hosting pod "coredns-7c65d6cfc9-vtxt8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-475689" has status "Ready":"False"
	I1007 12:02:35.908316   71483 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-475689" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:35.913729   71483 pod_ready.go:98] node "embed-certs-475689" hosting pod "etcd-embed-certs-475689" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-475689" has status "Ready":"False"
	I1007 12:02:35.913761   71483 pod_ready.go:82] duration metric: took 5.436149ms for pod "etcd-embed-certs-475689" in "kube-system" namespace to be "Ready" ...
	E1007 12:02:35.913769   71483 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-475689" hosting pod "etcd-embed-certs-475689" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-475689" has status "Ready":"False"
	I1007 12:02:35.913776   71483 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-475689" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:35.918914   71483 pod_ready.go:98] node "embed-certs-475689" hosting pod "kube-apiserver-embed-certs-475689" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-475689" has status "Ready":"False"
	I1007 12:02:35.918940   71483 pod_ready.go:82] duration metric: took 5.15749ms for pod "kube-apiserver-embed-certs-475689" in "kube-system" namespace to be "Ready" ...
	E1007 12:02:35.918949   71483 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-475689" hosting pod "kube-apiserver-embed-certs-475689" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-475689" has status "Ready":"False"
	I1007 12:02:35.918956   71483 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-475689" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:35.978800   71483 pod_ready.go:98] node "embed-certs-475689" hosting pod "kube-controller-manager-embed-certs-475689" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-475689" has status "Ready":"False"
	I1007 12:02:35.978831   71483 pod_ready.go:82] duration metric: took 59.867878ms for pod "kube-controller-manager-embed-certs-475689" in "kube-system" namespace to be "Ready" ...
	E1007 12:02:35.978841   71483 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-475689" hosting pod "kube-controller-manager-embed-certs-475689" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-475689" has status "Ready":"False"
	I1007 12:02:35.978847   71483 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-6l84n" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:36.378702   71483 pod_ready.go:98] node "embed-certs-475689" hosting pod "kube-proxy-6l84n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-475689" has status "Ready":"False"
	I1007 12:02:36.378732   71483 pod_ready.go:82] duration metric: took 399.878154ms for pod "kube-proxy-6l84n" in "kube-system" namespace to be "Ready" ...
	E1007 12:02:36.378741   71483 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-475689" hosting pod "kube-proxy-6l84n" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-475689" has status "Ready":"False"
	I1007 12:02:36.378747   71483 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-475689" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:36.778547   71483 pod_ready.go:98] node "embed-certs-475689" hosting pod "kube-scheduler-embed-certs-475689" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-475689" has status "Ready":"False"
	I1007 12:02:36.778571   71483 pod_ready.go:82] duration metric: took 399.818595ms for pod "kube-scheduler-embed-certs-475689" in "kube-system" namespace to be "Ready" ...
	E1007 12:02:36.778580   71483 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-475689" hosting pod "kube-scheduler-embed-certs-475689" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-475689" has status "Ready":"False"
	I1007 12:02:36.778587   71483 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-cld8v" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:37.179756   71483 pod_ready.go:98] node "embed-certs-475689" hosting pod "metrics-server-6867b74b74-cld8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-475689" has status "Ready":"False"
	I1007 12:02:37.179784   71483 pod_ready.go:82] duration metric: took 401.189333ms for pod "metrics-server-6867b74b74-cld8v" in "kube-system" namespace to be "Ready" ...
	E1007 12:02:37.179794   71483 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-475689" hosting pod "metrics-server-6867b74b74-cld8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-475689" has status "Ready":"False"
	I1007 12:02:37.179801   71483 pod_ready.go:39] duration metric: took 1.290698534s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:02:37.179818   71483 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 12:02:37.195368   71483 ops.go:34] apiserver oom_adj: -16
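	[editor note] The two log lines above are minikube reading the API server's OOM score adjustment; the reported value of -16 makes the kube-apiserver process much less likely to be picked by the kernel OOM killer. A minimal manual check, assuming shell access to the node (for example via 'minikube ssh -p embed-certs-475689'), mirrors the command in the log:
	    # hypothetical manual re-run of the same check
	    sudo cat /proc/$(pgrep kube-apiserver)/oom_adj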
	I1007 12:02:37.195392   71483 kubeadm.go:597] duration metric: took 9.312340637s to restartPrimaryControlPlane
	I1007 12:02:37.195404   71483 kubeadm.go:394] duration metric: took 9.36077265s to StartCluster
	I1007 12:02:37.195474   71483 settings.go:142] acquiring lock: {Name:mk699f217216dbe513edf6a42c79fe85f8c20124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:02:37.195597   71483 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 12:02:37.198340   71483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/kubeconfig: {Name:mkc8a5ce1dbafe55e056433fff5c065506f83346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:02:37.198607   71483 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:02:37.198686   71483 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 12:02:37.198794   71483 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-475689"
	I1007 12:02:37.198817   71483 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-475689"
	W1007 12:02:37.198826   71483 addons.go:243] addon storage-provisioner should already be in state true
	I1007 12:02:37.198837   71483 addons.go:69] Setting metrics-server=true in profile "embed-certs-475689"
	I1007 12:02:37.198863   71483 addons.go:234] Setting addon metrics-server=true in "embed-certs-475689"
	I1007 12:02:37.198864   71483 host.go:66] Checking if "embed-certs-475689" exists ...
	I1007 12:02:37.198866   71483 config.go:182] Loaded profile config "embed-certs-475689": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W1007 12:02:37.198872   71483 addons.go:243] addon metrics-server should already be in state true
	I1007 12:02:37.198884   71483 addons.go:69] Setting default-storageclass=true in profile "embed-certs-475689"
	I1007 12:02:37.198904   71483 host.go:66] Checking if "embed-certs-475689" exists ...
	I1007 12:02:37.198921   71483 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-475689"
	I1007 12:02:37.199292   71483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:02:37.199302   71483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:02:37.199302   71483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:02:37.199333   71483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:02:37.199343   71483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:02:37.199407   71483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:02:37.200287   71483 out.go:177] * Verifying Kubernetes components...
	I1007 12:02:37.201882   71483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:02:37.215502   71483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46501
	I1007 12:02:37.215926   71483 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:02:37.216504   71483 main.go:141] libmachine: Using API Version  1
	I1007 12:02:37.216534   71483 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:02:37.216980   71483 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:02:37.217197   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetState
	I1007 12:02:37.218311   71483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46621
	I1007 12:02:37.218724   71483 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:02:37.219254   71483 main.go:141] libmachine: Using API Version  1
	I1007 12:02:37.219278   71483 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:02:37.219590   71483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35339
	I1007 12:02:37.219775   71483 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:02:37.220356   71483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:02:37.220394   71483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:02:37.220553   71483 addons.go:234] Setting addon default-storageclass=true in "embed-certs-475689"
	W1007 12:02:37.220571   71483 addons.go:243] addon default-storageclass should already be in state true
	I1007 12:02:37.220595   71483 host.go:66] Checking if "embed-certs-475689" exists ...
	I1007 12:02:37.220740   71483 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:02:37.220876   71483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:02:37.220905   71483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:02:37.221195   71483 main.go:141] libmachine: Using API Version  1
	I1007 12:02:37.221213   71483 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:02:37.221530   71483 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:02:37.222024   71483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:02:37.222061   71483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:02:37.235965   71483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45357
	I1007 12:02:37.236589   71483 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:02:37.237122   71483 main.go:141] libmachine: Using API Version  1
	I1007 12:02:37.237139   71483 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:02:37.237488   71483 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:02:37.237665   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetState
	I1007 12:02:37.239532   71483 main.go:141] libmachine: (embed-certs-475689) Calling .DriverName
	I1007 12:02:37.239936   71483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42671
	I1007 12:02:37.240077   71483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33487
	I1007 12:02:37.240311   71483 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:02:37.240465   71483 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:02:37.240901   71483 main.go:141] libmachine: Using API Version  1
	I1007 12:02:37.240929   71483 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:02:37.240901   71483 main.go:141] libmachine: Using API Version  1
	I1007 12:02:37.240991   71483 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:02:37.241359   71483 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:02:37.241407   71483 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:02:37.241622   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetState
	I1007 12:02:37.242053   71483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:02:37.242055   71483 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1007 12:02:37.242098   71483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:02:37.243419   71483 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 12:02:37.243439   71483 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 12:02:37.243474   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHHostname
	I1007 12:02:37.243612   71483 main.go:141] libmachine: (embed-certs-475689) Calling .DriverName
	I1007 12:02:37.245157   71483 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:02:33.950372   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:34.449973   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:34.949820   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:35.449395   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:35.950153   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:36.449791   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:36.950298   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:37.449779   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:37.949807   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:38.449719   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
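	[editor note] Process 72038 above is polling for a running kube-apiserver by matching its full command line; the timestamps show a retry roughly every 500ms, and pgrep exits non-zero while no match exists, which keeps the loop going. The probe is the single command from the log:
	    # -f matches against the full command line, -x requires an exact match, -n returns only the newest PID
	    sudo pgrep -xnf kube-apiserver.*minikube.*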
	I1007 12:02:37.246344   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:37.246719   71483 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:02:37.246738   71483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 12:02:37.246756   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHHostname
	I1007 12:02:37.246831   71483 main.go:141] libmachine: (embed-certs-475689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d2:45", ip: ""} in network mk-embed-certs-475689: {Iface:virbr4 ExpiryTime:2024-10-07 12:53:22 +0000 UTC Type:0 Mac:52:54:00:7e:d2:45 Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:embed-certs-475689 Clientid:01:52:54:00:7e:d2:45}
	I1007 12:02:37.246856   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined IP address 192.168.50.37 and MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:37.246976   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHPort
	I1007 12:02:37.247143   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHKeyPath
	I1007 12:02:37.247272   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHUsername
	I1007 12:02:37.247434   71483 sshutil.go:53] new ssh client: &{IP:192.168.50.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/embed-certs-475689/id_rsa Username:docker}
	I1007 12:02:37.250030   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:37.250484   71483 main.go:141] libmachine: (embed-certs-475689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d2:45", ip: ""} in network mk-embed-certs-475689: {Iface:virbr4 ExpiryTime:2024-10-07 12:53:22 +0000 UTC Type:0 Mac:52:54:00:7e:d2:45 Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:embed-certs-475689 Clientid:01:52:54:00:7e:d2:45}
	I1007 12:02:37.250546   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined IP address 192.168.50.37 and MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:37.250709   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHPort
	I1007 12:02:37.250901   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHKeyPath
	I1007 12:02:37.251087   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHUsername
	I1007 12:02:37.251203   71483 sshutil.go:53] new ssh client: &{IP:192.168.50.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/embed-certs-475689/id_rsa Username:docker}
	I1007 12:02:37.289132   71483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43897
	I1007 12:02:37.289571   71483 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:02:37.290241   71483 main.go:141] libmachine: Using API Version  1
	I1007 12:02:37.290275   71483 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:02:37.290724   71483 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:02:37.290920   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetState
	I1007 12:02:37.292859   71483 main.go:141] libmachine: (embed-certs-475689) Calling .DriverName
	I1007 12:02:37.293091   71483 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 12:02:37.293108   71483 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 12:02:37.293127   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHHostname
	I1007 12:02:37.296537   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:37.297019   71483 main.go:141] libmachine: (embed-certs-475689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:d2:45", ip: ""} in network mk-embed-certs-475689: {Iface:virbr4 ExpiryTime:2024-10-07 12:53:22 +0000 UTC Type:0 Mac:52:54:00:7e:d2:45 Iaid: IPaddr:192.168.50.37 Prefix:24 Hostname:embed-certs-475689 Clientid:01:52:54:00:7e:d2:45}
	I1007 12:02:37.297043   71483 main.go:141] libmachine: (embed-certs-475689) DBG | domain embed-certs-475689 has defined IP address 192.168.50.37 and MAC address 52:54:00:7e:d2:45 in network mk-embed-certs-475689
	I1007 12:02:37.297199   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHPort
	I1007 12:02:37.297381   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHKeyPath
	I1007 12:02:37.297534   71483 main.go:141] libmachine: (embed-certs-475689) Calling .GetSSHUsername
	I1007 12:02:37.297651   71483 sshutil.go:53] new ssh client: &{IP:192.168.50.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/embed-certs-475689/id_rsa Username:docker}
	I1007 12:02:37.449323   71483 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:02:37.475130   71483 node_ready.go:35] waiting up to 6m0s for node "embed-certs-475689" to be "Ready" ...
	I1007 12:02:37.537767   71483 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 12:02:37.537791   71483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1007 12:02:37.548914   71483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:02:37.562756   71483 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 12:02:37.562804   71483 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 12:02:37.586728   71483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 12:02:37.640996   71483 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 12:02:37.641029   71483 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 12:02:37.695829   71483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
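	[editor note] The addon installation above copies each manifest to /etc/kubernetes/addons/ on the node and then applies them with the kubectl binary minikube keeps inside the VM. A manual equivalent (run on the node itself; this is simply the command from the log, re-wrapped for readability) would be:
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl \
	      apply -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	            -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	            -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	            -f /etc/kubernetes/addons/metrics-server-service.yaml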
	I1007 12:02:38.884135   71483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.297370967s)
	I1007 12:02:38.884186   71483 main.go:141] libmachine: Making call to close driver server
	I1007 12:02:38.884200   71483 main.go:141] libmachine: (embed-certs-475689) Calling .Close
	I1007 12:02:38.884226   71483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.335275365s)
	I1007 12:02:38.884262   71483 main.go:141] libmachine: Making call to close driver server
	I1007 12:02:38.884274   71483 main.go:141] libmachine: (embed-certs-475689) Calling .Close
	I1007 12:02:38.884607   71483 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:02:38.884618   71483 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:02:38.884623   71483 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:02:38.884629   71483 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:02:38.884632   71483 main.go:141] libmachine: Making call to close driver server
	I1007 12:02:38.884638   71483 main.go:141] libmachine: Making call to close driver server
	I1007 12:02:38.884648   71483 main.go:141] libmachine: (embed-certs-475689) Calling .Close
	I1007 12:02:38.884640   71483 main.go:141] libmachine: (embed-certs-475689) Calling .Close
	I1007 12:02:38.884996   71483 main.go:141] libmachine: (embed-certs-475689) DBG | Closing plugin on server side
	I1007 12:02:38.885012   71483 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:02:38.885027   71483 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:02:38.885031   71483 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:02:38.885039   71483 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:02:38.893183   71483 main.go:141] libmachine: Making call to close driver server
	I1007 12:02:38.893200   71483 main.go:141] libmachine: (embed-certs-475689) Calling .Close
	I1007 12:02:38.893469   71483 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:02:38.893485   71483 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:02:38.942321   71483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.246444708s)
	I1007 12:02:38.942381   71483 main.go:141] libmachine: Making call to close driver server
	I1007 12:02:38.942396   71483 main.go:141] libmachine: (embed-certs-475689) Calling .Close
	I1007 12:02:38.942683   71483 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:02:38.942706   71483 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:02:38.942715   71483 main.go:141] libmachine: (embed-certs-475689) DBG | Closing plugin on server side
	I1007 12:02:38.942720   71483 main.go:141] libmachine: Making call to close driver server
	I1007 12:02:38.942804   71483 main.go:141] libmachine: (embed-certs-475689) Calling .Close
	I1007 12:02:38.943025   71483 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:02:38.943035   71483 main.go:141] libmachine: (embed-certs-475689) DBG | Closing plugin on server side
	I1007 12:02:38.943039   71483 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:02:38.943052   71483 addons.go:475] Verifying addon metrics-server=true in "embed-certs-475689"
	I1007 12:02:38.945071   71483 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1007 12:02:35.390306   71606 pod_ready.go:103] pod "metrics-server-6867b74b74-kg48w" in "kube-system" namespace has status "Ready":"False"
	I1007 12:02:37.391197   71606 pod_ready.go:103] pod "metrics-server-6867b74b74-kg48w" in "kube-system" namespace has status "Ready":"False"
	I1007 12:02:39.887284   71606 pod_ready.go:103] pod "metrics-server-6867b74b74-kg48w" in "kube-system" namespace has status "Ready":"False"
	I1007 12:02:38.949430   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:39.449594   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:39.950126   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:40.449661   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:40.949938   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:41.450344   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:41.950388   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:42.450140   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:42.950016   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:43.450363   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:38.946577   71483 addons.go:510] duration metric: took 1.747899148s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
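	[editor note] With the three addons reported as enabled, one way to confirm from the host (not part of this log; assumes the kubeconfig context minikube creates for the profile) would be:
	    minikube addons list -p embed-certs-475689
	    kubectl --context embed-certs-475689 -n kube-system get pods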
	I1007 12:02:39.479397   71483 node_ready.go:53] node "embed-certs-475689" has status "Ready":"False"
	I1007 12:02:41.979663   71483 node_ready.go:53] node "embed-certs-475689" has status "Ready":"False"
	I1007 12:02:41.889827   71606 pod_ready.go:103] pod "metrics-server-6867b74b74-kg48w" in "kube-system" namespace has status "Ready":"False"
	I1007 12:02:44.388341   71606 pod_ready.go:103] pod "metrics-server-6867b74b74-kg48w" in "kube-system" namespace has status "Ready":"False"
	I1007 12:02:43.950432   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:44.450325   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:44.949426   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:45.450299   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:45.950353   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:46.450461   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:46.949760   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:47.450274   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:47.949393   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:48.449637   72038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:02:44.478851   71483 node_ready.go:49] node "embed-certs-475689" has status "Ready":"True"
	I1007 12:02:44.478875   71483 node_ready.go:38] duration metric: took 7.003711946s for node "embed-certs-475689" to be "Ready" ...
	I1007 12:02:44.478886   71483 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:02:44.484820   71483 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vtxt8" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:44.495557   71483 pod_ready.go:93] pod "coredns-7c65d6cfc9-vtxt8" in "kube-system" namespace has status "Ready":"True"
	I1007 12:02:44.495578   71483 pod_ready.go:82] duration metric: took 10.732127ms for pod "coredns-7c65d6cfc9-vtxt8" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:44.495589   71483 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-475689" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:44.501333   71483 pod_ready.go:93] pod "etcd-embed-certs-475689" in "kube-system" namespace has status "Ready":"True"
	I1007 12:02:44.501355   71483 pod_ready.go:82] duration metric: took 5.760558ms for pod "etcd-embed-certs-475689" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:44.501364   71483 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-475689" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:44.506975   71483 pod_ready.go:93] pod "kube-apiserver-embed-certs-475689" in "kube-system" namespace has status "Ready":"True"
	I1007 12:02:44.506999   71483 pod_ready.go:82] duration metric: took 5.628166ms for pod "kube-apiserver-embed-certs-475689" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:44.507010   71483 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-475689" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:44.512434   71483 pod_ready.go:93] pod "kube-controller-manager-embed-certs-475689" in "kube-system" namespace has status "Ready":"True"
	I1007 12:02:44.512457   71483 pod_ready.go:82] duration metric: took 5.438974ms for pod "kube-controller-manager-embed-certs-475689" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:44.512469   71483 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6l84n" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:44.880336   71483 pod_ready.go:93] pod "kube-proxy-6l84n" in "kube-system" namespace has status "Ready":"True"
	I1007 12:02:44.880366   71483 pod_ready.go:82] duration metric: took 367.888664ms for pod "kube-proxy-6l84n" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:44.880379   71483 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-475689" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:46.480460   71483 pod_ready.go:93] pod "kube-scheduler-embed-certs-475689" in "kube-system" namespace has status "Ready":"True"
	I1007 12:02:46.480483   71483 pod_ready.go:82] duration metric: took 1.60009592s for pod "kube-scheduler-embed-certs-475689" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:46.480493   71483 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-cld8v" in "kube-system" namespace to be "Ready" ...
	I1007 12:02:48.487160   71483 pod_ready.go:103] pod "metrics-server-6867b74b74-cld8v" in "kube-system" namespace has status "Ready":"False"
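	[editor note] The pod_ready entries above are minikube's own readiness polling loop. An approximate stand-alone equivalent with plain kubectl (an assumption for illustration, not what minikube runs internally) would be:
	    kubectl --context embed-certs-475689 wait --for=condition=Ready node/embed-certs-475689 --timeout=6m
	    kubectl --context embed-certs-475689 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m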
	I1007 12:02:46.388941   71606 pod_ready.go:103] pod "metrics-server-6867b74b74-kg48w" in "kube-system" namespace has status "Ready":"False"
	I1007 12:02:48.888372   71606 pod_ready.go:103] pod "metrics-server-6867b74b74-kg48w" in "kube-system" namespace has status "Ready":"False"
	I1007 12:02:51.525746   58399 kubeadm.go:310] [api-check] The API server is not healthy after 4m0.000430342s
	I1007 12:02:51.525766   58399 kubeadm.go:310] 
	I1007 12:02:51.525801   58399 kubeadm.go:310] Unfortunately, an error has occurred:
	I1007 12:02:51.525824   58399 kubeadm.go:310] 	context deadline exceeded
	I1007 12:02:51.525827   58399 kubeadm.go:310] 
	I1007 12:02:51.525855   58399 kubeadm.go:310] This error is likely caused by:
	I1007 12:02:51.525880   58399 kubeadm.go:310] 	- The kubelet is not running
	I1007 12:02:51.525963   58399 kubeadm.go:310] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1007 12:02:51.525968   58399 kubeadm.go:310] 
	I1007 12:02:51.526089   58399 kubeadm.go:310] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1007 12:02:51.526127   58399 kubeadm.go:310] 	- 'systemctl status kubelet'
	I1007 12:02:51.526172   58399 kubeadm.go:310] 	- 'journalctl -xeu kubelet'
	I1007 12:02:51.526186   58399 kubeadm.go:310] 
	I1007 12:02:51.526282   58399 kubeadm.go:310] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1007 12:02:51.526359   58399 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1007 12:02:51.526439   58399 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1007 12:02:51.526519   58399 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1007 12:02:51.526584   58399 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I1007 12:02:51.526648   58399 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1007 12:02:51.528147   58399 kubeadm.go:310] W1007 11:58:49.180041   10657 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 12:02:51.528459   58399 kubeadm.go:310] W1007 11:58:49.180845   10657 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 12:02:51.528552   58399 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 12:02:51.528625   58399 kubeadm.go:310] error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	I1007 12:02:51.528681   58399 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1007 12:02:51.528729   58399 kubeadm.go:394] duration metric: took 12m9.907727386s to StartCluster
	I1007 12:02:51.528760   58399 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 12:02:51.528802   58399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 12:02:51.574296   58399 cri.go:89] found id: ""
	I1007 12:02:51.574310   58399 logs.go:282] 0 containers: []
	W1007 12:02:51.574316   58399 logs.go:284] No container was found matching "kube-apiserver"
	I1007 12:02:51.574321   58399 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 12:02:51.574368   58399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 12:02:51.612436   58399 cri.go:89] found id: "5a6cfd18c9d8cbe8ca36718d1d86082a9d9573d6e9eee0e3264a5be3c4730ad9"
	I1007 12:02:51.612446   58399 cri.go:89] found id: ""
	I1007 12:02:51.612451   58399 logs.go:282] 1 containers: [5a6cfd18c9d8cbe8ca36718d1d86082a9d9573d6e9eee0e3264a5be3c4730ad9]
	I1007 12:02:51.612496   58399 ssh_runner.go:195] Run: which crictl
	I1007 12:02:51.617369   58399 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 12:02:51.617420   58399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 12:02:51.655753   58399 cri.go:89] found id: ""
	I1007 12:02:51.655770   58399 logs.go:282] 0 containers: []
	W1007 12:02:51.655778   58399 logs.go:284] No container was found matching "coredns"
	I1007 12:02:51.655784   58399 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 12:02:51.655839   58399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 12:02:51.693702   58399 cri.go:89] found id: "bbbd9e65540f6ac1495009985b22189238177b02d729bb4a83978a76dff93fd7"
	I1007 12:02:51.693712   58399 cri.go:89] found id: ""
	I1007 12:02:51.693718   58399 logs.go:282] 1 containers: [bbbd9e65540f6ac1495009985b22189238177b02d729bb4a83978a76dff93fd7]
	I1007 12:02:51.693763   58399 ssh_runner.go:195] Run: which crictl
	I1007 12:02:51.699676   58399 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 12:02:51.699724   58399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 12:02:51.738185   58399 cri.go:89] found id: ""
	I1007 12:02:51.738197   58399 logs.go:282] 0 containers: []
	W1007 12:02:51.738203   58399 logs.go:284] No container was found matching "kube-proxy"
	I1007 12:02:51.738208   58399 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 12:02:51.738257   58399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 12:02:51.778078   58399 cri.go:89] found id: "5e8e46c3c5b989d1342a4c62dcb4e7f0f057cd539b6233d93d1062ae385537e5"
	I1007 12:02:51.778088   58399 cri.go:89] found id: "c23b68eafbfe116c1b762e4ee68d3857203f7bf597887f950a97dc9ab630c202"
	I1007 12:02:51.778090   58399 cri.go:89] found id: ""
	I1007 12:02:51.778096   58399 logs.go:282] 2 containers: [5e8e46c3c5b989d1342a4c62dcb4e7f0f057cd539b6233d93d1062ae385537e5 c23b68eafbfe116c1b762e4ee68d3857203f7bf597887f950a97dc9ab630c202]
	I1007 12:02:51.778139   58399 ssh_runner.go:195] Run: which crictl
	I1007 12:02:51.782645   58399 ssh_runner.go:195] Run: which crictl
	I1007 12:02:51.787243   58399 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 12:02:51.787289   58399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 12:02:51.824359   58399 cri.go:89] found id: ""
	I1007 12:02:51.824379   58399 logs.go:282] 0 containers: []
	W1007 12:02:51.824385   58399 logs.go:284] No container was found matching "kindnet"
	I1007 12:02:51.824389   58399 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1007 12:02:51.824437   58399 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1007 12:02:51.862662   58399 cri.go:89] found id: ""
	I1007 12:02:51.862677   58399 logs.go:282] 0 containers: []
	W1007 12:02:51.862683   58399 logs.go:284] No container was found matching "storage-provisioner"
	I1007 12:02:51.862695   58399 logs.go:123] Gathering logs for kube-scheduler [bbbd9e65540f6ac1495009985b22189238177b02d729bb4a83978a76dff93fd7] ...
	I1007 12:02:51.862708   58399 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bbbd9e65540f6ac1495009985b22189238177b02d729bb4a83978a76dff93fd7"
	I1007 12:02:51.952214   58399 logs.go:123] Gathering logs for kube-controller-manager [c23b68eafbfe116c1b762e4ee68d3857203f7bf597887f950a97dc9ab630c202] ...
	I1007 12:02:51.952232   58399 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c23b68eafbfe116c1b762e4ee68d3857203f7bf597887f950a97dc9ab630c202"
	I1007 12:02:51.997635   58399 logs.go:123] Gathering logs for describe nodes ...
	I1007 12:02:51.997649   58399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 12:02:52.089879   58399 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 12:02:52.089890   58399 logs.go:123] Gathering logs for dmesg ...
	I1007 12:02:52.089901   58399 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 12:02:52.108154   58399 logs.go:123] Gathering logs for etcd [5a6cfd18c9d8cbe8ca36718d1d86082a9d9573d6e9eee0e3264a5be3c4730ad9] ...
	I1007 12:02:52.108169   58399 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a6cfd18c9d8cbe8ca36718d1d86082a9d9573d6e9eee0e3264a5be3c4730ad9"
	I1007 12:02:52.157435   58399 logs.go:123] Gathering logs for kube-controller-manager [5e8e46c3c5b989d1342a4c62dcb4e7f0f057cd539b6233d93d1062ae385537e5] ...
	I1007 12:02:52.157449   58399 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e8e46c3c5b989d1342a4c62dcb4e7f0f057cd539b6233d93d1062ae385537e5"
	I1007 12:02:52.193921   58399 logs.go:123] Gathering logs for CRI-O ...
	I1007 12:02:52.193935   58399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 12:02:52.422949   58399 logs.go:123] Gathering logs for container status ...
	I1007 12:02:52.422969   58399 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 12:02:52.478407   58399 logs.go:123] Gathering logs for kubelet ...
	I1007 12:02:52.478426   58399 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 12:02:52.629867   58399 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.006247417s
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000430342s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
	W1007 11:58:49.180041   10657 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	W1007 11:58:49.180845   10657 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1007 12:02:52.629916   58399 out.go:270] * 
	W1007 12:02:52.629989   58399 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.006247417s
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000430342s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
	W1007 11:58:49.180041   10657 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	W1007 11:58:49.180845   10657 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1007 12:02:52.630002   58399 out.go:270] * 
	W1007 12:02:52.630770   58399 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 12:02:52.633596   58399 out.go:201] 
	W1007 12:02:52.634844   58399 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.006247417s
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000430342s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
	W1007 11:58:49.180041   10657 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	W1007 11:58:49.180845   10657 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1007 12:02:52.634899   58399 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1007 12:02:52.634923   58399 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1007 12:02:52.636323   58399 out.go:201] 
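	[editor note] The kubeadm failure above already names the follow-up checks; collected in one place, to be run on the affected node (for example via minikube ssh into that profile), they are:
	    systemctl status kubelet
	    journalctl -xeu kubelet
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID   # CONTAINERID taken from the ps output above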
	
	
	==> CRI-O <==
	Oct 07 12:02:53 cert-expiration-658191 crio[2919]: time="2024-10-07 12:02:53.340725834Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302573340698956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=52c6bd96-b71b-4878-9c65-9d073023889f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:02:53 cert-expiration-658191 crio[2919]: time="2024-10-07 12:02:53.341472222Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50972c3a-0156-4356-88e3-180049094455 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:02:53 cert-expiration-658191 crio[2919]: time="2024-10-07 12:02:53.341523782Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50972c3a-0156-4356-88e3-180049094455 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:02:53 cert-expiration-658191 crio[2919]: time="2024-10-07 12:02:53.341637387Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e8e46c3c5b989d1342a4c62dcb4e7f0f057cd539b6233d93d1062ae385537e5,PodSandboxId:63ae5d61da9c17e4f157014f28a4c5892b2064dcbca93ea82bd7206d20cd52a6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:18,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728302561045018833,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-cert-expiration-658191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cde27ebafb0fe298fbdedf14354230,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.contain
er.restartCount: 18,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a6cfd18c9d8cbe8ca36718d1d86082a9d9573d6e9eee0e3264a5be3c4730ad9,PodSandboxId:3c862e12fdc6ea9f0fa3590c27a8026b799cbca79ce611115238c63ce5ff7187,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728302331720931560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-cert-expiration-658191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2e0097881e703e0f00ba4074d51a1c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 4,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbbd9e65540f6ac1495009985b22189238177b02d729bb4a83978a76dff93fd7,PodSandboxId:141a230ab9878cf53ab07a0b82d08e09cb6ad1a7911717526cc8af58a984a655,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302331634283374,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-cert-expiration-658191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db175078ea2669e2b3765d1673d10a20,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 4,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50972c3a-0156-4356-88e3-180049094455 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:02:53 cert-expiration-658191 crio[2919]: time="2024-10-07 12:02:53.383520455Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=74585430-a3db-4677-8f43-248730fa8e57 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:02:53 cert-expiration-658191 crio[2919]: time="2024-10-07 12:02:53.383589727Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=74585430-a3db-4677-8f43-248730fa8e57 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:02:53 cert-expiration-658191 crio[2919]: time="2024-10-07 12:02:53.384800493Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0c6ed078-db57-4d3a-ba19-aaa85c7cf5a0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:02:53 cert-expiration-658191 crio[2919]: time="2024-10-07 12:02:53.386314859Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302573386161319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c6ed078-db57-4d3a-ba19-aaa85c7cf5a0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:02:53 cert-expiration-658191 crio[2919]: time="2024-10-07 12:02:53.389122964Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0b54b9e4-9437-4729-801d-b098e5cb4d7e name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:02:53 cert-expiration-658191 crio[2919]: time="2024-10-07 12:02:53.389188776Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0b54b9e4-9437-4729-801d-b098e5cb4d7e name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:02:53 cert-expiration-658191 crio[2919]: time="2024-10-07 12:02:53.389277840Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e8e46c3c5b989d1342a4c62dcb4e7f0f057cd539b6233d93d1062ae385537e5,PodSandboxId:63ae5d61da9c17e4f157014f28a4c5892b2064dcbca93ea82bd7206d20cd52a6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:18,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728302561045018833,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-cert-expiration-658191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cde27ebafb0fe298fbdedf14354230,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.contain
er.restartCount: 18,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a6cfd18c9d8cbe8ca36718d1d86082a9d9573d6e9eee0e3264a5be3c4730ad9,PodSandboxId:3c862e12fdc6ea9f0fa3590c27a8026b799cbca79ce611115238c63ce5ff7187,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728302331720931560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-cert-expiration-658191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2e0097881e703e0f00ba4074d51a1c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 4,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbbd9e65540f6ac1495009985b22189238177b02d729bb4a83978a76dff93fd7,PodSandboxId:141a230ab9878cf53ab07a0b82d08e09cb6ad1a7911717526cc8af58a984a655,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302331634283374,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-cert-expiration-658191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db175078ea2669e2b3765d1673d10a20,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 4,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0b54b9e4-9437-4729-801d-b098e5cb4d7e name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:02:53 cert-expiration-658191 crio[2919]: time="2024-10-07 12:02:53.425938685Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=875478b5-9375-42e7-a8a0-65edab6bc191 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:02:53 cert-expiration-658191 crio[2919]: time="2024-10-07 12:02:53.426011577Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=875478b5-9375-42e7-a8a0-65edab6bc191 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:02:53 cert-expiration-658191 crio[2919]: time="2024-10-07 12:02:53.427353166Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9e2100dc-0f9b-4a67-acf2-3ca797327efe name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:02:53 cert-expiration-658191 crio[2919]: time="2024-10-07 12:02:53.427759094Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302573427731679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e2100dc-0f9b-4a67-acf2-3ca797327efe name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:02:53 cert-expiration-658191 crio[2919]: time="2024-10-07 12:02:53.428519587Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ddffc0c-a48b-45e2-b983-a42e9506f992 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:02:53 cert-expiration-658191 crio[2919]: time="2024-10-07 12:02:53.428591372Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ddffc0c-a48b-45e2-b983-a42e9506f992 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:02:53 cert-expiration-658191 crio[2919]: time="2024-10-07 12:02:53.428686674Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e8e46c3c5b989d1342a4c62dcb4e7f0f057cd539b6233d93d1062ae385537e5,PodSandboxId:63ae5d61da9c17e4f157014f28a4c5892b2064dcbca93ea82bd7206d20cd52a6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:18,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728302561045018833,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-cert-expiration-658191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cde27ebafb0fe298fbdedf14354230,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.contain
er.restartCount: 18,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a6cfd18c9d8cbe8ca36718d1d86082a9d9573d6e9eee0e3264a5be3c4730ad9,PodSandboxId:3c862e12fdc6ea9f0fa3590c27a8026b799cbca79ce611115238c63ce5ff7187,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728302331720931560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-cert-expiration-658191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2e0097881e703e0f00ba4074d51a1c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 4,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbbd9e65540f6ac1495009985b22189238177b02d729bb4a83978a76dff93fd7,PodSandboxId:141a230ab9878cf53ab07a0b82d08e09cb6ad1a7911717526cc8af58a984a655,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302331634283374,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-cert-expiration-658191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db175078ea2669e2b3765d1673d10a20,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 4,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ddffc0c-a48b-45e2-b983-a42e9506f992 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:02:53 cert-expiration-658191 crio[2919]: time="2024-10-07 12:02:53.461651730Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b7c808df-e70e-4757-8e36-25c2433caaab name=/runtime.v1.RuntimeService/Version
	Oct 07 12:02:53 cert-expiration-658191 crio[2919]: time="2024-10-07 12:02:53.461721581Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b7c808df-e70e-4757-8e36-25c2433caaab name=/runtime.v1.RuntimeService/Version
	Oct 07 12:02:53 cert-expiration-658191 crio[2919]: time="2024-10-07 12:02:53.468406861Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9ffbe50b-c9c3-4c6a-a5f7-fe83b20665b6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:02:53 cert-expiration-658191 crio[2919]: time="2024-10-07 12:02:53.468846105Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302573468743729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9ffbe50b-c9c3-4c6a-a5f7-fe83b20665b6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:02:53 cert-expiration-658191 crio[2919]: time="2024-10-07 12:02:53.469568700Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=86b7e06a-c5db-4de4-bfa7-f09931c421c3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:02:53 cert-expiration-658191 crio[2919]: time="2024-10-07 12:02:53.469624062Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=86b7e06a-c5db-4de4-bfa7-f09931c421c3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:02:53 cert-expiration-658191 crio[2919]: time="2024-10-07 12:02:53.469712507Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e8e46c3c5b989d1342a4c62dcb4e7f0f057cd539b6233d93d1062ae385537e5,PodSandboxId:63ae5d61da9c17e4f157014f28a4c5892b2064dcbca93ea82bd7206d20cd52a6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:18,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728302561045018833,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-cert-expiration-658191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32cde27ebafb0fe298fbdedf14354230,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.contain
er.restartCount: 18,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a6cfd18c9d8cbe8ca36718d1d86082a9d9573d6e9eee0e3264a5be3c4730ad9,PodSandboxId:3c862e12fdc6ea9f0fa3590c27a8026b799cbca79ce611115238c63ce5ff7187,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728302331720931560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-cert-expiration-658191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b2e0097881e703e0f00ba4074d51a1c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 4,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbbd9e65540f6ac1495009985b22189238177b02d729bb4a83978a76dff93fd7,PodSandboxId:141a230ab9878cf53ab07a0b82d08e09cb6ad1a7911717526cc8af58a984a655,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302331634283374,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-cert-expiration-658191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db175078ea2669e2b3765d1673d10a20,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 4,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=86b7e06a-c5db-4de4-bfa7-f09931c421c3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5e8e46c3c5b98       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   12 seconds ago      Exited              kube-controller-manager   18                  63ae5d61da9c1       kube-controller-manager-cert-expiration-658191
	5a6cfd18c9d8c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   4 minutes ago       Running             etcd                      4                   3c862e12fdc6e       etcd-cert-expiration-658191
	bbbd9e65540f6       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   4 minutes ago       Running             kube-scheduler            4                   141a230ab9878       kube-scheduler-cert-expiration-658191
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.217069] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.128486] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.296574] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +4.277076] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +0.062273] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.401903] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +1.094970] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.490246] systemd-fstab-generator[1225]: Ignoring "noauto" option for root device
	[  +0.084067] kauditd_printk_skb: 30 callbacks suppressed
	[  +1.229395] systemd-fstab-generator[1314]: Ignoring "noauto" option for root device
	[  +4.158997] kauditd_printk_skb: 49 callbacks suppressed
	[Oct 7 11:46] kauditd_printk_skb: 57 callbacks suppressed
	[Oct 7 11:49] systemd-fstab-generator[2637]: Ignoring "noauto" option for root device
	[  +0.219345] systemd-fstab-generator[2680]: Ignoring "noauto" option for root device
	[  +0.311217] systemd-fstab-generator[2707]: Ignoring "noauto" option for root device
	[  +0.242519] systemd-fstab-generator[2725]: Ignoring "noauto" option for root device
	[  +0.394104] systemd-fstab-generator[2755]: Ignoring "noauto" option for root device
	[Oct 7 11:50] systemd-fstab-generator[3036]: Ignoring "noauto" option for root device
	[  +0.105069] kauditd_printk_skb: 180 callbacks suppressed
	[  +3.820388] systemd-fstab-generator[3476]: Ignoring "noauto" option for root device
	[ +12.525511] kauditd_printk_skb: 108 callbacks suppressed
	[Oct 7 11:54] systemd-fstab-generator[9902]: Ignoring "noauto" option for root device
	[ +12.603811] kauditd_printk_skb: 68 callbacks suppressed
	[Oct 7 11:58] systemd-fstab-generator[10682]: Ignoring "noauto" option for root device
	[Oct 7 11:59] kauditd_printk_skb: 60 callbacks suppressed
	
	
	==> etcd [5a6cfd18c9d8cbe8ca36718d1d86082a9d9573d6e9eee0e3264a5be3c4730ad9] <==
	{"level":"info","ts":"2024-10-07T11:58:52.036960Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-07T11:58:52.037378Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"4e5e32f94c376694","initial-advertise-peer-urls":["https://192.168.61.15:2380"],"listen-peer-urls":["https://192.168.61.15:2380"],"advertise-client-urls":["https://192.168.61.15:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.15:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-07T11:58:52.037454Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-07T11:58:52.037544Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.15:2380"}
	{"level":"info","ts":"2024-10-07T11:58:52.037567Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.15:2380"}
	{"level":"info","ts":"2024-10-07T11:58:52.177162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e5e32f94c376694 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-07T11:58:52.177217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e5e32f94c376694 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-07T11:58:52.177232Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e5e32f94c376694 received MsgPreVoteResp from 4e5e32f94c376694 at term 1"}
	{"level":"info","ts":"2024-10-07T11:58:52.177255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e5e32f94c376694 became candidate at term 2"}
	{"level":"info","ts":"2024-10-07T11:58:52.177261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e5e32f94c376694 received MsgVoteResp from 4e5e32f94c376694 at term 2"}
	{"level":"info","ts":"2024-10-07T11:58:52.177274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e5e32f94c376694 became leader at term 2"}
	{"level":"info","ts":"2024-10-07T11:58:52.177281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4e5e32f94c376694 elected leader 4e5e32f94c376694 at term 2"}
	{"level":"info","ts":"2024-10-07T11:58:52.182332Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T11:58:52.184512Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"4e5e32f94c376694","local-member-attributes":"{Name:cert-expiration-658191 ClientURLs:[https://192.168.61.15:2379]}","request-path":"/0/members/4e5e32f94c376694/attributes","cluster-id":"cec272b56a0b2be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-07T11:58:52.184649Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T11:58:52.184957Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T11:58:52.185186Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cec272b56a0b2be","local-member-id":"4e5e32f94c376694","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T11:58:52.185269Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T11:58:52.185311Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T11:58:52.185888Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T11:58:52.188710Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.15:2379"}
	{"level":"info","ts":"2024-10-07T11:58:52.189587Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T11:58:52.192307Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-07T11:58:52.202107Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-07T11:58:52.202178Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 12:02:53 up 17 min,  0 users,  load average: 0.14, 0.16, 0.13
	Linux cert-expiration-658191 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-controller-manager [5e8e46c3c5b989d1342a4c62dcb4e7f0f057cd539b6233d93d1062ae385537e5] <==
	I1007 12:02:41.718212       1 serving.go:386] Generated self-signed cert in-memory
	I1007 12:02:42.145470       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I1007 12:02:42.145568       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 12:02:42.147476       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1007 12:02:42.147647       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1007 12:02:42.148207       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1007 12:02:42.148297       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1007 12:02:52.150899       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.61.15:8443/healthz\": dial tcp 192.168.61.15:8443: connect: connection refused"
	
	
	==> kube-scheduler [bbbd9e65540f6ac1495009985b22189238177b02d729bb4a83978a76dff93fd7] <==
	E1007 12:02:23.585985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.61.15:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.61.15:8443: connect: connection refused" logger="UnhandledError"
	W1007 12:02:27.844004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.61.15:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.15:8443: connect: connection refused
	E1007 12:02:27.844176       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.61.15:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.61.15:8443: connect: connection refused" logger="UnhandledError"
	W1007 12:02:29.919347       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.61.15:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.15:8443: connect: connection refused
	E1007 12:02:29.919409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.61.15:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.61.15:8443: connect: connection refused" logger="UnhandledError"
	W1007 12:02:32.069342       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.61.15:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.61.15:8443: connect: connection refused
	E1007 12:02:32.069424       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.61.15:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.61.15:8443: connect: connection refused" logger="UnhandledError"
	W1007 12:02:32.458230       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.61.15:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.61.15:8443: connect: connection refused
	E1007 12:02:32.458315       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.61.15:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.61.15:8443: connect: connection refused" logger="UnhandledError"
	W1007 12:02:33.831665       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.61.15:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.61.15:8443: connect: connection refused
	E1007 12:02:33.831772       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.61.15:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.61.15:8443: connect: connection refused" logger="UnhandledError"
	W1007 12:02:34.765307       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.61.15:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.61.15:8443: connect: connection refused
	E1007 12:02:34.765403       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.61.15:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.61.15:8443: connect: connection refused" logger="UnhandledError"
	W1007 12:02:39.926504       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.61.15:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.61.15:8443: connect: connection refused
	E1007 12:02:39.926621       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.61.15:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.61.15:8443: connect: connection refused" logger="UnhandledError"
	W1007 12:02:42.737294       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.61.15:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.61.15:8443: connect: connection refused
	E1007 12:02:42.737443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.61.15:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.61.15:8443: connect: connection refused" logger="UnhandledError"
	W1007 12:02:44.002567       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.61.15:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.61.15:8443: connect: connection refused
	E1007 12:02:44.002632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.61.15:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.61.15:8443: connect: connection refused" logger="UnhandledError"
	W1007 12:02:45.289824       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.61.15:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.15:8443: connect: connection refused
	E1007 12:02:45.289899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.61.15:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.61.15:8443: connect: connection refused" logger="UnhandledError"
	W1007 12:02:46.158549       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.61.15:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.61.15:8443: connect: connection refused
	E1007 12:02:46.158607       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.61.15:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.61.15:8443: connect: connection refused" logger="UnhandledError"
	W1007 12:02:49.397630       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.61.15:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.61.15:8443: connect: connection refused
	E1007 12:02:49.397720       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.61.15:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.61.15:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Oct 07 12:02:40 cert-expiration-658191 kubelet[10689]: I1007 12:02:40.906206   10689 kubelet_node_status.go:72] "Attempting to register node" node="cert-expiration-658191"
	Oct 07 12:02:40 cert-expiration-658191 kubelet[10689]: E1007 12:02:40.907393   10689 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.15:8443: connect: connection refused" node="cert-expiration-658191"
	Oct 07 12:02:41 cert-expiration-658191 kubelet[10689]: I1007 12:02:41.029538   10689 scope.go:117] "RemoveContainer" containerID="c23b68eafbfe116c1b762e4ee68d3857203f7bf597887f950a97dc9ab630c202"
	Oct 07 12:02:41 cert-expiration-658191 kubelet[10689]: W1007 12:02:41.064957   10689 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.15:8443: connect: connection refused
	Oct 07 12:02:41 cert-expiration-658191 kubelet[10689]: E1007 12:02:41.065118   10689 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.61.15:8443: connect: connection refused" logger="UnhandledError"
	Oct 07 12:02:41 cert-expiration-658191 kubelet[10689]: E1007 12:02:41.122239   10689 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302561121406626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:02:41 cert-expiration-658191 kubelet[10689]: E1007 12:02:41.122278   10689 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302561121406626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:02:42 cert-expiration-658191 kubelet[10689]: E1007 12:02:42.240438   10689 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.61.15:8443: connect: connection refused" event="&Event{ObjectMeta:{cert-expiration-658191.17fc29d02b1556c6  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:cert-expiration-658191,UID:cert-expiration-658191,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node cert-expiration-658191 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:cert-expiration-658191,},FirstTimestamp:2024-10-07 11:58:51.056182982 +0000 UTC m=+0.578079681,LastTimestamp:2024-10-07 11:58:51.056182982 +0000 UTC m=+0.578079681,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:ku
belet,ReportingInstance:cert-expiration-658191,}"
	Oct 07 12:02:47 cert-expiration-658191 kubelet[10689]: E1007 12:02:47.690015   10689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/cert-expiration-658191?timeout=10s\": dial tcp 192.168.61.15:8443: connect: connection refused" interval="7s"
	Oct 07 12:02:47 cert-expiration-658191 kubelet[10689]: I1007 12:02:47.909357   10689 kubelet_node_status.go:72] "Attempting to register node" node="cert-expiration-658191"
	Oct 07 12:02:47 cert-expiration-658191 kubelet[10689]: E1007 12:02:47.910597   10689 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.15:8443: connect: connection refused" node="cert-expiration-658191"
	Oct 07 12:02:51 cert-expiration-658191 kubelet[10689]: E1007 12:02:51.038717   10689 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-apiserver_kube-apiserver-cert-expiration-658191_kube-system_a2b4317e14e16e5a1a0dcfd2d39f8a48_1\" is already in use by 94acf5a92c9b295800f441e893079f0e7a62c415f88c7d6d53e7a130aa0fe544. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="5f0dc0af7a79f70b33bfe2605c57fdb7164f1b4cb6813e74ba3fc6c995aa141e"
	Oct 07 12:02:51 cert-expiration-658191 kubelet[10689]: E1007 12:02:51.039718   10689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-apiserver,Image:registry.k8s.io/kube-apiserver:v1.31.1,Command:[kube-apiserver --advertise-address=192.168.61.15 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/var/lib/minikube/certs/ca.crt --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --enable-bootstrap-token-auth=true --etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt --etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt --etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt --kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key --kubelet-preferred-a
ddress-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt --proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=8443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/minikube/certs/sa.pub --service-account-signing-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/var/lib/minikube/certs/apiserver.crt --tls-private-key-file=/var/lib/minikube/certs/apiserver.key],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{250 -3} {<nil>} 250m DecimalSI},},Claims:[]ResourceClaim{},
},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8443 },Host:192.168.61.15,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8443 },Host:192.168.61.15,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:15,
PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8443 },Host:192.168.61.15,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-apiserver-cert-expiration-658191_kube-system(a2b4317e14e16e5a1a0dcfd2d39f8a48): CreateContainerError: the container name \"k8s_kube-apiserver_kube-apiserver-cert-expiration-658191_kube-system_a2b4317e14e16e5a1a0dcfd2d39f8a48_1\" is already in use by 94acf5a92c9b295800f441e893079f0e7a62c415f88c
7d6d53e7a130aa0fe544. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Oct 07 12:02:51 cert-expiration-658191 kubelet[10689]: E1007 12:02:51.041389   10689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"the container name \\\"k8s_kube-apiserver_kube-apiserver-cert-expiration-658191_kube-system_a2b4317e14e16e5a1a0dcfd2d39f8a48_1\\\" is already in use by 94acf5a92c9b295800f441e893079f0e7a62c415f88c7d6d53e7a130aa0fe544. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-apiserver-cert-expiration-658191" podUID="a2b4317e14e16e5a1a0dcfd2d39f8a48"
	Oct 07 12:02:51 cert-expiration-658191 kubelet[10689]: E1007 12:02:51.049671   10689 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 12:02:51 cert-expiration-658191 kubelet[10689]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 12:02:51 cert-expiration-658191 kubelet[10689]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 12:02:51 cert-expiration-658191 kubelet[10689]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 12:02:51 cert-expiration-658191 kubelet[10689]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 12:02:51 cert-expiration-658191 kubelet[10689]: E1007 12:02:51.128340   10689 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302571125692429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:02:51 cert-expiration-658191 kubelet[10689]: E1007 12:02:51.128370   10689 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302571125692429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:02:52 cert-expiration-658191 kubelet[10689]: E1007 12:02:52.242243   10689 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.61.15:8443: connect: connection refused" event="&Event{ObjectMeta:{cert-expiration-658191.17fc29d02b1556c6  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:cert-expiration-658191,UID:cert-expiration-658191,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node cert-expiration-658191 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:cert-expiration-658191,},FirstTimestamp:2024-10-07 11:58:51.056182982 +0000 UTC m=+0.578079681,LastTimestamp:2024-10-07 11:58:51.056182982 +0000 UTC m=+0.578079681,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:ku
belet,ReportingInstance:cert-expiration-658191,}"
	Oct 07 12:02:52 cert-expiration-658191 kubelet[10689]: I1007 12:02:52.841244   10689 scope.go:117] "RemoveContainer" containerID="c23b68eafbfe116c1b762e4ee68d3857203f7bf597887f950a97dc9ab630c202"
	Oct 07 12:02:52 cert-expiration-658191 kubelet[10689]: I1007 12:02:52.842613   10689 scope.go:117] "RemoveContainer" containerID="5e8e46c3c5b989d1342a4c62dcb4e7f0f057cd539b6233d93d1062ae385537e5"
	Oct 07 12:02:52 cert-expiration-658191 kubelet[10689]: E1007 12:02:52.842832   10689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-cert-expiration-658191_kube-system(32cde27ebafb0fe298fbdedf14354230)\"" pod="kube-system/kube-controller-manager-cert-expiration-658191" podUID="32cde27ebafb0fe298fbdedf14354230"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p cert-expiration-658191 -n cert-expiration-658191
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p cert-expiration-658191 -n cert-expiration-658191: exit status 2 (226.196588ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "cert-expiration-658191" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-658191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-658191
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-658191: (1.03977462s)
--- FAIL: TestCertExpiration (1086.08s)
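
Debugging note: the kubeadm output above points at the kubelet and the control-plane containers as the first things to inspect when the API server never becomes healthy. A minimal node-side sequence, restating only the commands the output itself recommends (assuming CRI-O's default socket at /var/run/crio/crio.sock, with CONTAINERID standing in for the ID of the failing container), would be:

	# Is the kubelet running, and why did it fail if not?
	systemctl status kubelet
	journalctl -xeu kubelet
	# List all Kubernetes containers (running and exited) under CRI-O
	crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Show the logs of the failing container (substitute the real ID)
	crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID

If the kubelet cgroup-driver mismatch suggested at the end of the run is the cause, minikube's own hint can be applied by restarting the profile with the extra kubelet config it names, e.g.:

	out/minikube-linux-amd64 start -p cert-expiration-658191 --extra-config=kubelet.cgroup-driver=systemd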

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 node stop m02 -v=7 --alsologtostderr
E1007 10:50:49.225252   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:51:30.187023   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-406505 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.477570778s)

                                                
                                                
-- stdout --
	* Stopping node "ha-406505-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 10:50:48.387821   27750 out.go:345] Setting OutFile to fd 1 ...
	I1007 10:50:48.387944   27750 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:50:48.387952   27750 out.go:358] Setting ErrFile to fd 2...
	I1007 10:50:48.387956   27750 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:50:48.388155   27750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 10:50:48.388397   27750 mustload.go:65] Loading cluster: ha-406505
	I1007 10:50:48.388744   27750 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:50:48.388759   27750 stop.go:39] StopHost: ha-406505-m02
	I1007 10:50:48.389113   27750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:50:48.389153   27750 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:50:48.406027   27750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40311
	I1007 10:50:48.406528   27750 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:50:48.407151   27750 main.go:141] libmachine: Using API Version  1
	I1007 10:50:48.407177   27750 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:50:48.407529   27750 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:50:48.410213   27750 out.go:177] * Stopping node "ha-406505-m02"  ...
	I1007 10:50:48.411683   27750 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1007 10:50:48.411726   27750 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:50:48.411976   27750 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1007 10:50:48.412037   27750 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:50:48.415202   27750 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:50:48.415623   27750 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:50:48.415650   27750 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:50:48.415902   27750 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:50:48.416129   27750 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:50:48.416300   27750 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:50:48.416419   27750 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa Username:docker}
	I1007 10:50:48.503959   27750 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1007 10:50:48.559786   27750 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1007 10:50:48.617639   27750 main.go:141] libmachine: Stopping "ha-406505-m02"...
	I1007 10:50:48.617666   27750 main.go:141] libmachine: (ha-406505-m02) Calling .GetState
	I1007 10:50:48.619222   27750 main.go:141] libmachine: (ha-406505-m02) Calling .Stop
	I1007 10:50:48.622587   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 0/120
	I1007 10:50:49.623906   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 1/120
	I1007 10:50:50.625707   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 2/120
	I1007 10:50:51.627820   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 3/120
	I1007 10:50:52.629435   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 4/120
	I1007 10:50:53.631140   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 5/120
	I1007 10:50:54.632617   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 6/120
	I1007 10:50:55.634783   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 7/120
	I1007 10:50:56.636027   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 8/120
	I1007 10:50:57.637607   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 9/120
	I1007 10:50:58.639854   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 10/120
	I1007 10:50:59.641729   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 11/120
	I1007 10:51:00.643719   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 12/120
	I1007 10:51:01.646510   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 13/120
	I1007 10:51:02.647966   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 14/120
	I1007 10:51:03.650260   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 15/120
	I1007 10:51:04.651667   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 16/120
	I1007 10:51:05.653183   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 17/120
	I1007 10:51:06.654581   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 18/120
	I1007 10:51:07.656440   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 19/120
	I1007 10:51:08.658261   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 20/120
	I1007 10:51:09.659676   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 21/120
	I1007 10:51:10.661531   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 22/120
	I1007 10:51:11.663195   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 23/120
	I1007 10:51:12.664729   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 24/120
	I1007 10:51:13.666039   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 25/120
	I1007 10:51:14.667522   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 26/120
	I1007 10:51:15.669087   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 27/120
	I1007 10:51:16.670839   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 28/120
	I1007 10:51:17.672270   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 29/120
	I1007 10:51:18.674752   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 30/120
	I1007 10:51:19.676803   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 31/120
	I1007 10:51:20.678443   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 32/120
	I1007 10:51:21.680006   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 33/120
	I1007 10:51:22.681587   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 34/120
	I1007 10:51:23.683396   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 35/120
	I1007 10:51:24.684794   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 36/120
	I1007 10:51:25.686441   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 37/120
	I1007 10:51:26.687913   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 38/120
	I1007 10:51:27.689597   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 39/120
	I1007 10:51:28.691044   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 40/120
	I1007 10:51:29.692420   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 41/120
	I1007 10:51:30.694634   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 42/120
	I1007 10:51:31.696046   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 43/120
	I1007 10:51:32.697262   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 44/120
	I1007 10:51:33.699219   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 45/120
	I1007 10:51:34.700647   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 46/120
	I1007 10:51:35.702035   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 47/120
	I1007 10:51:36.703624   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 48/120
	I1007 10:51:37.704964   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 49/120
	I1007 10:51:38.707711   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 50/120
	I1007 10:51:39.709065   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 51/120
	I1007 10:51:40.710400   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 52/120
	I1007 10:51:41.712049   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 53/120
	I1007 10:51:42.713563   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 54/120
	I1007 10:51:43.715372   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 55/120
	I1007 10:51:44.717019   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 56/120
	I1007 10:51:45.718313   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 57/120
	I1007 10:51:46.719729   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 58/120
	I1007 10:51:47.721191   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 59/120
	I1007 10:51:48.723371   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 60/120
	I1007 10:51:49.725041   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 61/120
	I1007 10:51:50.726545   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 62/120
	I1007 10:51:51.727871   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 63/120
	I1007 10:51:52.729224   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 64/120
	I1007 10:51:53.730694   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 65/120
	I1007 10:51:54.732003   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 66/120
	I1007 10:51:55.733178   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 67/120
	I1007 10:51:56.735086   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 68/120
	I1007 10:51:57.736576   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 69/120
	I1007 10:51:58.738834   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 70/120
	I1007 10:51:59.740366   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 71/120
	I1007 10:52:00.742623   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 72/120
	I1007 10:52:01.744745   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 73/120
	I1007 10:52:02.746161   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 74/120
	I1007 10:52:03.748542   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 75/120
	I1007 10:52:04.750062   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 76/120
	I1007 10:52:05.752373   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 77/120
	I1007 10:52:06.753686   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 78/120
	I1007 10:52:07.754996   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 79/120
	I1007 10:52:08.756350   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 80/120
	I1007 10:52:09.757774   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 81/120
	I1007 10:52:10.759137   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 82/120
	I1007 10:52:11.760881   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 83/120
	I1007 10:52:12.762045   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 84/120
	I1007 10:52:13.764058   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 85/120
	I1007 10:52:14.765470   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 86/120
	I1007 10:52:15.766875   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 87/120
	I1007 10:52:16.768364   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 88/120
	I1007 10:52:17.770686   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 89/120
	I1007 10:52:18.772532   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 90/120
	I1007 10:52:19.774266   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 91/120
	I1007 10:52:20.775573   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 92/120
	I1007 10:52:21.778050   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 93/120
	I1007 10:52:22.779158   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 94/120
	I1007 10:52:23.781115   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 95/120
	I1007 10:52:24.782608   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 96/120
	I1007 10:52:25.783856   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 97/120
	I1007 10:52:26.785127   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 98/120
	I1007 10:52:27.786505   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 99/120
	I1007 10:52:28.788515   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 100/120
	I1007 10:52:29.789773   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 101/120
	I1007 10:52:30.791235   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 102/120
	I1007 10:52:31.792626   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 103/120
	I1007 10:52:32.794001   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 104/120
	I1007 10:52:33.796234   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 105/120
	I1007 10:52:34.798483   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 106/120
	I1007 10:52:35.799775   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 107/120
	I1007 10:52:36.801292   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 108/120
	I1007 10:52:37.802640   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 109/120
	I1007 10:52:38.804897   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 110/120
	I1007 10:52:39.806131   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 111/120
	I1007 10:52:40.807438   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 112/120
	I1007 10:52:41.808737   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 113/120
	I1007 10:52:42.810330   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 114/120
	I1007 10:52:43.812431   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 115/120
	I1007 10:52:44.814375   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 116/120
	I1007 10:52:45.816065   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 117/120
	I1007 10:52:46.817450   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 118/120
	I1007 10:52:47.819149   27750 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 119/120
	I1007 10:52:48.819872   27750 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1007 10:52:48.819999   27750 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-406505 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr
E1007 10:52:52.109845   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr: (18.747850055s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-406505 -n ha-406505
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-406505 logs -n 25: (1.605127225s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-406505 cp ha-406505-m03:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2665267876/001/cp-test_ha-406505-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m03:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505:/home/docker/cp-test_ha-406505-m03_ha-406505.txt                       |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505 sudo cat                                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m03_ha-406505.txt                                 |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m03:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m02:/home/docker/cp-test_ha-406505-m03_ha-406505-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m02 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m03_ha-406505-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m03:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04:/home/docker/cp-test_ha-406505-m03_ha-406505-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m04 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m03_ha-406505-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-406505 cp testdata/cp-test.txt                                                | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2665267876/001/cp-test_ha-406505-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505:/home/docker/cp-test_ha-406505-m04_ha-406505.txt                       |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505 sudo cat                                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m04_ha-406505.txt                                 |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m02:/home/docker/cp-test_ha-406505-m04_ha-406505-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m02 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m04_ha-406505-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03:/home/docker/cp-test_ha-406505-m04_ha-406505-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m03 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m04_ha-406505-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-406505 node stop m02 -v=7                                                     | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 10:46:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 10:46:00.685163   23621 out.go:345] Setting OutFile to fd 1 ...
	I1007 10:46:00.685349   23621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:46:00.685361   23621 out.go:358] Setting ErrFile to fd 2...
	I1007 10:46:00.685369   23621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:46:00.685896   23621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 10:46:00.686526   23621 out.go:352] Setting JSON to false
	I1007 10:46:00.687357   23621 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1655,"bootTime":1728296306,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 10:46:00.687449   23621 start.go:139] virtualization: kvm guest
	I1007 10:46:00.689739   23621 out.go:177] * [ha-406505] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 10:46:00.691129   23621 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 10:46:00.691156   23621 notify.go:220] Checking for updates...
	I1007 10:46:00.693697   23621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 10:46:00.695072   23621 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:46:00.696501   23621 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:46:00.697726   23621 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 10:46:00.698926   23621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 10:46:00.700212   23621 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 10:46:00.737316   23621 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 10:46:00.738839   23621 start.go:297] selected driver: kvm2
	I1007 10:46:00.738857   23621 start.go:901] validating driver "kvm2" against <nil>
	I1007 10:46:00.738870   23621 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 10:46:00.739587   23621 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:46:00.739673   23621 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19761-3912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 10:46:00.755165   23621 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 10:46:00.755211   23621 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 10:46:00.755442   23621 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 10:46:00.755469   23621 cni.go:84] Creating CNI manager for ""
	I1007 10:46:00.755509   23621 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1007 10:46:00.755520   23621 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 10:46:00.755574   23621 start.go:340] cluster config:
	{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:46:00.755686   23621 iso.go:125] acquiring lock: {Name:mk894f62b1e70fe8556c552f818beb1e4be6fad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:46:00.757513   23621 out.go:177] * Starting "ha-406505" primary control-plane node in "ha-406505" cluster
	I1007 10:46:00.758765   23621 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:46:00.758805   23621 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 10:46:00.758823   23621 cache.go:56] Caching tarball of preloaded images
	I1007 10:46:00.758896   23621 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 10:46:00.758906   23621 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 10:46:00.759224   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:46:00.759245   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json: {Name:mk9b03e101af877bc71d822d951dd0373d9dda34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:00.759379   23621 start.go:360] acquireMachinesLock for ha-406505: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 10:46:00.759405   23621 start.go:364] duration metric: took 14.549µs to acquireMachinesLock for "ha-406505"
	I1007 10:46:00.759421   23621 start.go:93] Provisioning new machine with config: &{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:46:00.759479   23621 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 10:46:00.761273   23621 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 10:46:00.761420   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:00.761466   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:00.775977   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35573
	I1007 10:46:00.776393   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:00.776945   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:00.776968   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:00.777275   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:00.777446   23621 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:46:00.777589   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:00.777737   23621 start.go:159] libmachine.API.Create for "ha-406505" (driver="kvm2")
	I1007 10:46:00.777767   23621 client.go:168] LocalClient.Create starting
	I1007 10:46:00.777806   23621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem
	I1007 10:46:00.777846   23621 main.go:141] libmachine: Decoding PEM data...
	I1007 10:46:00.777867   23621 main.go:141] libmachine: Parsing certificate...
	I1007 10:46:00.777925   23621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem
	I1007 10:46:00.777949   23621 main.go:141] libmachine: Decoding PEM data...
	I1007 10:46:00.777966   23621 main.go:141] libmachine: Parsing certificate...
	I1007 10:46:00.777989   23621 main.go:141] libmachine: Running pre-create checks...
	I1007 10:46:00.778000   23621 main.go:141] libmachine: (ha-406505) Calling .PreCreateCheck
	I1007 10:46:00.778317   23621 main.go:141] libmachine: (ha-406505) Calling .GetConfigRaw
	I1007 10:46:00.778644   23621 main.go:141] libmachine: Creating machine...
	I1007 10:46:00.778656   23621 main.go:141] libmachine: (ha-406505) Calling .Create
	I1007 10:46:00.778771   23621 main.go:141] libmachine: (ha-406505) Creating KVM machine...
	I1007 10:46:00.779972   23621 main.go:141] libmachine: (ha-406505) DBG | found existing default KVM network
	I1007 10:46:00.780650   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:00.780522   23644 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a50}
	I1007 10:46:00.780693   23621 main.go:141] libmachine: (ha-406505) DBG | created network xml: 
	I1007 10:46:00.780713   23621 main.go:141] libmachine: (ha-406505) DBG | <network>
	I1007 10:46:00.780722   23621 main.go:141] libmachine: (ha-406505) DBG |   <name>mk-ha-406505</name>
	I1007 10:46:00.780732   23621 main.go:141] libmachine: (ha-406505) DBG |   <dns enable='no'/>
	I1007 10:46:00.780741   23621 main.go:141] libmachine: (ha-406505) DBG |   
	I1007 10:46:00.780752   23621 main.go:141] libmachine: (ha-406505) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1007 10:46:00.780763   23621 main.go:141] libmachine: (ha-406505) DBG |     <dhcp>
	I1007 10:46:00.780774   23621 main.go:141] libmachine: (ha-406505) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1007 10:46:00.780793   23621 main.go:141] libmachine: (ha-406505) DBG |     </dhcp>
	I1007 10:46:00.780806   23621 main.go:141] libmachine: (ha-406505) DBG |   </ip>
	I1007 10:46:00.780813   23621 main.go:141] libmachine: (ha-406505) DBG |   
	I1007 10:46:00.780820   23621 main.go:141] libmachine: (ha-406505) DBG | </network>
	I1007 10:46:00.780827   23621 main.go:141] libmachine: (ha-406505) DBG | 
	I1007 10:46:00.785975   23621 main.go:141] libmachine: (ha-406505) DBG | trying to create private KVM network mk-ha-406505 192.168.39.0/24...
	I1007 10:46:00.849882   23621 main.go:141] libmachine: (ha-406505) DBG | private KVM network mk-ha-406505 192.168.39.0/24 created
	I1007 10:46:00.849911   23621 main.go:141] libmachine: (ha-406505) Setting up store path in /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505 ...
	I1007 10:46:00.849973   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:00.849860   23644 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:46:00.850002   23621 main.go:141] libmachine: (ha-406505) Building disk image from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 10:46:00.850027   23621 main.go:141] libmachine: (ha-406505) Downloading /home/jenkins/minikube-integration/19761-3912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 10:46:01.096727   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:01.096588   23644 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa...
	I1007 10:46:01.205683   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:01.205510   23644 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/ha-406505.rawdisk...
	I1007 10:46:01.205717   23621 main.go:141] libmachine: (ha-406505) DBG | Writing magic tar header
	I1007 10:46:01.205736   23621 main.go:141] libmachine: (ha-406505) DBG | Writing SSH key tar header
	I1007 10:46:01.205745   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:01.205639   23644 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505 ...
	I1007 10:46:01.205758   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505
	I1007 10:46:01.205765   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines
	I1007 10:46:01.205774   23621 main.go:141] libmachine: (ha-406505) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505 (perms=drwx------)
	I1007 10:46:01.205782   23621 main.go:141] libmachine: (ha-406505) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines (perms=drwxr-xr-x)
	I1007 10:46:01.205789   23621 main.go:141] libmachine: (ha-406505) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube (perms=drwxr-xr-x)
	I1007 10:46:01.205799   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:46:01.205809   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912
	I1007 10:46:01.205820   23621 main.go:141] libmachine: (ha-406505) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912 (perms=drwxrwxr-x)
	I1007 10:46:01.205825   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 10:46:01.205832   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home/jenkins
	I1007 10:46:01.205838   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home
	I1007 10:46:01.205845   23621 main.go:141] libmachine: (ha-406505) DBG | Skipping /home - not owner
	I1007 10:46:01.205854   23621 main.go:141] libmachine: (ha-406505) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 10:46:01.205860   23621 main.go:141] libmachine: (ha-406505) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 10:46:01.205868   23621 main.go:141] libmachine: (ha-406505) Creating domain...
	I1007 10:46:01.207028   23621 main.go:141] libmachine: (ha-406505) define libvirt domain using xml: 
	I1007 10:46:01.207069   23621 main.go:141] libmachine: (ha-406505) <domain type='kvm'>
	I1007 10:46:01.207077   23621 main.go:141] libmachine: (ha-406505)   <name>ha-406505</name>
	I1007 10:46:01.207082   23621 main.go:141] libmachine: (ha-406505)   <memory unit='MiB'>2200</memory>
	I1007 10:46:01.207087   23621 main.go:141] libmachine: (ha-406505)   <vcpu>2</vcpu>
	I1007 10:46:01.207093   23621 main.go:141] libmachine: (ha-406505)   <features>
	I1007 10:46:01.207097   23621 main.go:141] libmachine: (ha-406505)     <acpi/>
	I1007 10:46:01.207103   23621 main.go:141] libmachine: (ha-406505)     <apic/>
	I1007 10:46:01.207108   23621 main.go:141] libmachine: (ha-406505)     <pae/>
	I1007 10:46:01.207115   23621 main.go:141] libmachine: (ha-406505)     
	I1007 10:46:01.207120   23621 main.go:141] libmachine: (ha-406505)   </features>
	I1007 10:46:01.207124   23621 main.go:141] libmachine: (ha-406505)   <cpu mode='host-passthrough'>
	I1007 10:46:01.207129   23621 main.go:141] libmachine: (ha-406505)   
	I1007 10:46:01.207133   23621 main.go:141] libmachine: (ha-406505)   </cpu>
	I1007 10:46:01.207137   23621 main.go:141] libmachine: (ha-406505)   <os>
	I1007 10:46:01.207141   23621 main.go:141] libmachine: (ha-406505)     <type>hvm</type>
	I1007 10:46:01.207145   23621 main.go:141] libmachine: (ha-406505)     <boot dev='cdrom'/>
	I1007 10:46:01.207150   23621 main.go:141] libmachine: (ha-406505)     <boot dev='hd'/>
	I1007 10:46:01.207154   23621 main.go:141] libmachine: (ha-406505)     <bootmenu enable='no'/>
	I1007 10:46:01.207161   23621 main.go:141] libmachine: (ha-406505)   </os>
	I1007 10:46:01.207186   23621 main.go:141] libmachine: (ha-406505)   <devices>
	I1007 10:46:01.207206   23621 main.go:141] libmachine: (ha-406505)     <disk type='file' device='cdrom'>
	I1007 10:46:01.207220   23621 main.go:141] libmachine: (ha-406505)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/boot2docker.iso'/>
	I1007 10:46:01.207236   23621 main.go:141] libmachine: (ha-406505)       <target dev='hdc' bus='scsi'/>
	I1007 10:46:01.207250   23621 main.go:141] libmachine: (ha-406505)       <readonly/>
	I1007 10:46:01.207259   23621 main.go:141] libmachine: (ha-406505)     </disk>
	I1007 10:46:01.207281   23621 main.go:141] libmachine: (ha-406505)     <disk type='file' device='disk'>
	I1007 10:46:01.207300   23621 main.go:141] libmachine: (ha-406505)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 10:46:01.207324   23621 main.go:141] libmachine: (ha-406505)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/ha-406505.rawdisk'/>
	I1007 10:46:01.207335   23621 main.go:141] libmachine: (ha-406505)       <target dev='hda' bus='virtio'/>
	I1007 10:46:01.207342   23621 main.go:141] libmachine: (ha-406505)     </disk>
	I1007 10:46:01.207348   23621 main.go:141] libmachine: (ha-406505)     <interface type='network'>
	I1007 10:46:01.207354   23621 main.go:141] libmachine: (ha-406505)       <source network='mk-ha-406505'/>
	I1007 10:46:01.207361   23621 main.go:141] libmachine: (ha-406505)       <model type='virtio'/>
	I1007 10:46:01.207369   23621 main.go:141] libmachine: (ha-406505)     </interface>
	I1007 10:46:01.207381   23621 main.go:141] libmachine: (ha-406505)     <interface type='network'>
	I1007 10:46:01.207395   23621 main.go:141] libmachine: (ha-406505)       <source network='default'/>
	I1007 10:46:01.207406   23621 main.go:141] libmachine: (ha-406505)       <model type='virtio'/>
	I1007 10:46:01.207415   23621 main.go:141] libmachine: (ha-406505)     </interface>
	I1007 10:46:01.207422   23621 main.go:141] libmachine: (ha-406505)     <serial type='pty'>
	I1007 10:46:01.207432   23621 main.go:141] libmachine: (ha-406505)       <target port='0'/>
	I1007 10:46:01.207442   23621 main.go:141] libmachine: (ha-406505)     </serial>
	I1007 10:46:01.207469   23621 main.go:141] libmachine: (ha-406505)     <console type='pty'>
	I1007 10:46:01.207491   23621 main.go:141] libmachine: (ha-406505)       <target type='serial' port='0'/>
	I1007 10:46:01.207513   23621 main.go:141] libmachine: (ha-406505)     </console>
	I1007 10:46:01.207526   23621 main.go:141] libmachine: (ha-406505)     <rng model='virtio'>
	I1007 10:46:01.207539   23621 main.go:141] libmachine: (ha-406505)       <backend model='random'>/dev/random</backend>
	I1007 10:46:01.207548   23621 main.go:141] libmachine: (ha-406505)     </rng>
	I1007 10:46:01.207554   23621 main.go:141] libmachine: (ha-406505)     
	I1007 10:46:01.207563   23621 main.go:141] libmachine: (ha-406505)     
	I1007 10:46:01.207572   23621 main.go:141] libmachine: (ha-406505)   </devices>
	I1007 10:46:01.207587   23621 main.go:141] libmachine: (ha-406505) </domain>
	I1007 10:46:01.207603   23621 main.go:141] libmachine: (ha-406505) 
	I1007 10:46:01.211673   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:76:8f:a7 in network default
	I1007 10:46:01.212309   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:01.212331   23621 main.go:141] libmachine: (ha-406505) Ensuring networks are active...
	I1007 10:46:01.212999   23621 main.go:141] libmachine: (ha-406505) Ensuring network default is active
	I1007 10:46:01.213295   23621 main.go:141] libmachine: (ha-406505) Ensuring network mk-ha-406505 is active
	I1007 10:46:01.213746   23621 main.go:141] libmachine: (ha-406505) Getting domain xml...
	I1007 10:46:01.214325   23621 main.go:141] libmachine: (ha-406505) Creating domain...
	I1007 10:46:02.421940   23621 main.go:141] libmachine: (ha-406505) Waiting to get IP...
	I1007 10:46:02.422559   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:02.422963   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:02.423013   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:02.422950   23644 retry.go:31] will retry after 195.328474ms: waiting for machine to come up
	I1007 10:46:02.620556   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:02.621120   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:02.621158   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:02.621075   23644 retry.go:31] will retry after 387.449002ms: waiting for machine to come up
	I1007 10:46:03.009575   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:03.010111   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:03.010135   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:03.010073   23644 retry.go:31] will retry after 404.721004ms: waiting for machine to come up
	I1007 10:46:03.416746   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:03.417186   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:03.417213   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:03.417138   23644 retry.go:31] will retry after 372.059443ms: waiting for machine to come up
	I1007 10:46:03.790603   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:03.791114   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:03.791143   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:03.791071   23644 retry.go:31] will retry after 494.767467ms: waiting for machine to come up
	I1007 10:46:04.287716   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:04.288192   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:04.288211   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:04.288147   23644 retry.go:31] will retry after 903.556325ms: waiting for machine to come up
	I1007 10:46:05.193010   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:05.193532   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:05.193599   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:05.193453   23644 retry.go:31] will retry after 1.025768675s: waiting for machine to come up
	I1007 10:46:06.220323   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:06.220836   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:06.220866   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:06.220776   23644 retry.go:31] will retry after 1.100294717s: waiting for machine to come up
	I1007 10:46:07.323044   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:07.323554   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:07.323582   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:07.323505   23644 retry.go:31] will retry after 1.146070621s: waiting for machine to come up
	I1007 10:46:08.470888   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:08.471336   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:08.471361   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:08.471279   23644 retry.go:31] will retry after 2.296444266s: waiting for machine to come up
	I1007 10:46:10.768938   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:10.769285   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:10.769343   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:10.769271   23644 retry.go:31] will retry after 2.239094894s: waiting for machine to come up
	I1007 10:46:13.010328   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:13.010763   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:13.010789   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:13.010721   23644 retry.go:31] will retry after 3.13857084s: waiting for machine to come up
	I1007 10:46:16.150462   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:16.150858   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:16.150885   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:16.150808   23644 retry.go:31] will retry after 3.125257266s: waiting for machine to come up
	I1007 10:46:19.280079   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:19.280531   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:19.280561   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:19.280474   23644 retry.go:31] will retry after 5.119838312s: waiting for machine to come up
	I1007 10:46:24.405645   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.406055   23621 main.go:141] libmachine: (ha-406505) Found IP for machine: 192.168.39.250
	I1007 10:46:24.406093   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has current primary IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
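The "will retry after ..." messages above come from minikube's retry helper while it waits for the new VM to pick up a DHCP lease. A minimal sketch of the same wait-with-backoff-and-jitter pattern follows; the function name and parameters are illustrative, not the actual retry.go API.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping a little longer (plus jitter) between attempts, mirroring the
// "will retry after ...: waiting for machine to come up" lines in the log.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	start := time.Now()
	backoff := time.Second
	for time.Since(start) < deadline {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)/2)) // add jitter
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 8*time.Second {
			backoff *= 2 // grow the interval, but keep retries a few seconds apart
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	tries := 0
	ip, err := waitForIP(func() (string, error) {
		tries++
		if tries < 3 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.250", nil
	}, time.Minute)
	fmt.Println(ip, err)
}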
	I1007 10:46:24.406101   23621 main.go:141] libmachine: (ha-406505) Reserving static IP address...
	I1007 10:46:24.406506   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find host DHCP lease matching {name: "ha-406505", mac: "52:54:00:1d:e2:d7", ip: "192.168.39.250"} in network mk-ha-406505
	I1007 10:46:24.482533   23621 main.go:141] libmachine: (ha-406505) DBG | Getting to WaitForSSH function...
	I1007 10:46:24.482567   23621 main.go:141] libmachine: (ha-406505) Reserved static IP address: 192.168.39.250
	I1007 10:46:24.482583   23621 main.go:141] libmachine: (ha-406505) Waiting for SSH to be available...
	I1007 10:46:24.485308   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.485711   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:24.485764   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.485909   23621 main.go:141] libmachine: (ha-406505) DBG | Using SSH client type: external
	I1007 10:46:24.485935   23621 main.go:141] libmachine: (ha-406505) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa (-rw-------)
	I1007 10:46:24.485971   23621 main.go:141] libmachine: (ha-406505) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 10:46:24.485988   23621 main.go:141] libmachine: (ha-406505) DBG | About to run SSH command:
	I1007 10:46:24.486003   23621 main.go:141] libmachine: (ha-406505) DBG | exit 0
	I1007 10:46:24.612334   23621 main.go:141] libmachine: (ha-406505) DBG | SSH cmd err, output: <nil>: 
	I1007 10:46:24.612631   23621 main.go:141] libmachine: (ha-406505) KVM machine creation complete!
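The WaitForSSH step above shells out to the system ssh binary with the options shown and treats a successful `exit 0` as proof that sshd inside the VM is reachable. A hedged sketch of that probe: the address and key path are copied from the log, but the helper itself is only an illustration, not minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
)

// sshReachable runs `ssh ... exit 0` against the new VM, like the external
// SSH client probe logged above. A zero exit status means sshd is accepting
// key-based logins for the docker user.
func sshReachable(addr, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + addr,
		"exit 0",
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh probe failed: %v (output: %s)", err, out)
	}
	return nil
}

func main() {
	err := sshReachable("192.168.39.250",
		"/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa")
	fmt.Println("reachable:", err == nil)
}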
	I1007 10:46:24.613069   23621 main.go:141] libmachine: (ha-406505) Calling .GetConfigRaw
	I1007 10:46:24.613769   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:24.614010   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:24.614210   23621 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 10:46:24.614233   23621 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:46:24.615544   23621 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 10:46:24.615563   23621 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 10:46:24.615570   23621 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 10:46:24.615577   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:24.617899   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.618287   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:24.618310   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.618494   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:24.618666   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.618809   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.618921   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:24.619056   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:46:24.619306   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:46:24.619320   23621 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 10:46:24.727419   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:46:24.727448   23621 main.go:141] libmachine: Detecting the provisioner...
	I1007 10:46:24.727458   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:24.730240   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.730602   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:24.730629   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.730740   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:24.730937   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.731096   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.731252   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:24.731417   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:46:24.731578   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:46:24.731587   23621 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 10:46:24.845378   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 10:46:24.845478   23621 main.go:141] libmachine: found compatible host: buildroot
	I1007 10:46:24.845490   23621 main.go:141] libmachine: Provisioning with buildroot...
	I1007 10:46:24.845498   23621 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:46:24.845780   23621 buildroot.go:166] provisioning hostname "ha-406505"
	I1007 10:46:24.845810   23621 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:46:24.846017   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:24.849059   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.849533   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:24.849565   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.849690   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:24.849892   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.850056   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.850226   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:24.850372   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:46:24.850530   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:46:24.850541   23621 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406505 && echo "ha-406505" | sudo tee /etc/hostname
	I1007 10:46:24.974484   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406505
	
	I1007 10:46:24.974507   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:24.977334   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.977841   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:24.977876   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.978053   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:24.978231   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.978390   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.978528   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:24.978725   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:46:24.978910   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:46:24.978926   23621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406505' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406505/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406505' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 10:46:25.097736   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
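Each provisioning command above (setting the hostname, patching /etc/hosts) is executed in a single SSH session via the "native" Go client. A sketch of that run-one-command pattern using golang.org/x/crypto/ssh; the exact session handling inside libmachine may differ, so treat this as an illustration only.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH opens one SSH session and runs a single command, roughly what
// the provisioning steps above do per command. It assumes key-based auth as
// user "docker" is sufficient; it is not minikube's actual code.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", addr+":22", cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.39.250", "docker",
		"/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa",
		`sudo hostname ha-406505 && echo "ha-406505" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}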
	I1007 10:46:25.097768   23621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 10:46:25.097810   23621 buildroot.go:174] setting up certificates
	I1007 10:46:25.097819   23621 provision.go:84] configureAuth start
	I1007 10:46:25.097832   23621 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:46:25.098143   23621 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:46:25.100773   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.101119   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.101156   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.101261   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:25.103487   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.103793   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.103821   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.103966   23621 provision.go:143] copyHostCerts
	I1007 10:46:25.104016   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:46:25.104068   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 10:46:25.104102   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:46:25.104302   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 10:46:25.104436   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:46:25.104469   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 10:46:25.104478   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:46:25.104534   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 10:46:25.104606   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:46:25.104633   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 10:46:25.104641   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:46:25.104691   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 10:46:25.104770   23621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.ha-406505 san=[127.0.0.1 192.168.39.250 ha-406505 localhost minikube]
	I1007 10:46:25.393470   23621 provision.go:177] copyRemoteCerts
	I1007 10:46:25.393548   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 10:46:25.393578   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:25.396327   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.396617   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.396642   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.396839   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:25.397030   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:25.397230   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:25.397379   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:46:25.482559   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 10:46:25.482632   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1007 10:46:25.508425   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 10:46:25.508519   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 10:46:25.534913   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 10:46:25.534986   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 10:46:25.560790   23621 provision.go:87] duration metric: took 462.953383ms to configureAuth
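configureAuth above issues a server certificate signed by the local minikube CA with the SAN list [127.0.0.1 192.168.39.250 ha-406505 localhost minikube]. A simplified crypto/x509 sketch of that signing step; it assumes an RSA PKCS#1 CA key, and the file names in main are hypothetical local copies, not the paths minikube manages.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

// signServerCert issues a server certificate for the SAN list logged above,
// signed by an existing CA. A sketch of the idea behind the "generating
// server cert" step, not minikube's implementation.
func signServerCert(caCertPEM, caKeyPEM []byte) ([]byte, error) {
	caBlock, _ := pem.Decode(caCertPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		return nil, fmt.Errorf("missing PEM data in CA cert or key")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		return nil, err
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key
	if err != nil {
		return nil, err
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-406505"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		DNSNames:     []string{"ha-406505", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.250")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	caCert, err1 := os.ReadFile("ca.pem") // hypothetical local copies of the CA pair
	caKey, err2 := os.ReadFile("ca-key.pem")
	if err1 != nil || err2 != nil {
		fmt.Println("CA files not found; this is only a sketch")
		return
	}
	cert, err := signServerCert(caCert, caKey)
	fmt.Println(len(cert), err)
}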
	I1007 10:46:25.560817   23621 buildroot.go:189] setting minikube options for container-runtime
	I1007 10:46:25.560982   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:46:25.561053   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:25.563730   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.564168   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.564201   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.564402   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:25.564589   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:25.564760   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:25.564923   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:25.565085   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:46:25.565253   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:46:25.565272   23621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 10:46:25.800362   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 10:46:25.800389   23621 main.go:141] libmachine: Checking connection to Docker...
	I1007 10:46:25.800397   23621 main.go:141] libmachine: (ha-406505) Calling .GetURL
	I1007 10:46:25.801606   23621 main.go:141] libmachine: (ha-406505) DBG | Using libvirt version 6000000
	I1007 10:46:25.803904   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.804248   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.804273   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.804397   23621 main.go:141] libmachine: Docker is up and running!
	I1007 10:46:25.804414   23621 main.go:141] libmachine: Reticulating splines...
	I1007 10:46:25.804421   23621 client.go:171] duration metric: took 25.026640958s to LocalClient.Create
	I1007 10:46:25.804457   23621 start.go:167] duration metric: took 25.026720726s to libmachine.API.Create "ha-406505"
	I1007 10:46:25.804469   23621 start.go:293] postStartSetup for "ha-406505" (driver="kvm2")
	I1007 10:46:25.804483   23621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 10:46:25.804519   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:25.804801   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 10:46:25.804822   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:25.806847   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.807242   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.807267   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.807402   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:25.807601   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:25.807734   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:25.807837   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:46:25.896212   23621 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 10:46:25.901311   23621 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 10:46:25.901340   23621 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 10:46:25.901403   23621 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 10:46:25.901507   23621 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 10:46:25.901521   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /etc/ssl/certs/110962.pem
	I1007 10:46:25.901647   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 10:46:25.912163   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:46:25.940558   23621 start.go:296] duration metric: took 136.073342ms for postStartSetup
	I1007 10:46:25.940602   23621 main.go:141] libmachine: (ha-406505) Calling .GetConfigRaw
	I1007 10:46:25.941179   23621 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:46:25.943928   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.944270   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.944295   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.944594   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:46:25.944766   23621 start.go:128] duration metric: took 25.185278256s to createHost
	I1007 10:46:25.944788   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:25.946920   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.947236   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.947263   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.947390   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:25.947554   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:25.947698   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:25.947796   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:25.947917   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:46:25.948107   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:46:25.948122   23621 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 10:46:26.057285   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728297986.034090654
	
	I1007 10:46:26.057320   23621 fix.go:216] guest clock: 1728297986.034090654
	I1007 10:46:26.057332   23621 fix.go:229] Guest: 2024-10-07 10:46:26.034090654 +0000 UTC Remote: 2024-10-07 10:46:25.944777719 +0000 UTC m=+25.297917279 (delta=89.312935ms)
	I1007 10:46:26.057360   23621 fix.go:200] guest clock delta is within tolerance: 89.312935ms
	I1007 10:46:26.057368   23621 start.go:83] releasing machines lock for "ha-406505", held for 25.297953369s
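The fix.go lines above read the guest clock with `date +%s.%N` and accept the result when the drift from the host clock is small (here 89.312935ms). A small sketch of that comparison; the 2-second tolerance below is an assumption for illustration, not necessarily the value minikube uses.

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance compares the timestamp reported by the guest with
// the host wall clock and accepts the drift if it is under the tolerance.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1728297986, 34090654) // 1728297986.034090654 from the log
	host := guest.Add(89312935 * time.Nanosecond)
	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second) // assumed tolerance
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}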
	I1007 10:46:26.057394   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:26.057664   23621 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:46:26.060710   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:26.061183   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:26.061235   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:26.061454   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:26.061984   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:26.062147   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:26.062276   23621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 10:46:26.062317   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:26.062353   23621 ssh_runner.go:195] Run: cat /version.json
	I1007 10:46:26.062375   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:26.065089   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:26.065433   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:26.065561   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:26.065589   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:26.065720   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:26.065828   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:26.065853   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:26.065883   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:26.065971   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:26.066095   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:26.066095   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:26.066234   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:26.066283   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:46:26.066351   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:46:26.174687   23621 ssh_runner.go:195] Run: systemctl --version
	I1007 10:46:26.181055   23621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 10:46:26.339685   23621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 10:46:26.346234   23621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 10:46:26.346285   23621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 10:46:26.362376   23621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 10:46:26.362399   23621 start.go:495] detecting cgroup driver to use...
	I1007 10:46:26.362452   23621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 10:46:26.378080   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 10:46:26.392505   23621 docker.go:217] disabling cri-docker service (if available) ...
	I1007 10:46:26.392560   23621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 10:46:26.406784   23621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 10:46:26.422960   23621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 10:46:26.552971   23621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 10:46:26.690240   23621 docker.go:233] disabling docker service ...
	I1007 10:46:26.690309   23621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 10:46:26.706428   23621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 10:46:26.721025   23621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 10:46:26.853079   23621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 10:46:26.978324   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 10:46:26.994454   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 10:46:27.014137   23621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 10:46:27.014198   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.025749   23621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 10:46:27.025816   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.037748   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.049263   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.062174   23621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 10:46:27.074940   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.086608   23621 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.104859   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.116719   23621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 10:46:27.127669   23621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 10:46:27.127745   23621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 10:46:27.142518   23621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
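The sysctl failure above is treated as "module not loaded yet": the fallback is to modprobe br_netfilter and then enable IPv4 forwarding. A sketch of that fallback logic; here the commands run locally with sudo, whereas in the log they are executed on the guest over SSH.

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback above: if the bridge-nf sysctl
// cannot be read, load br_netfilter, then turn on IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// a missing sysctl key usually means the module is absent; try to load it
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	fmt.Println(ensureBridgeNetfilter())
}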
	I1007 10:46:27.153045   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:46:27.275924   23621 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 10:46:27.373391   23621 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 10:46:27.373475   23621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 10:46:27.378225   23621 start.go:563] Will wait 60s for crictl version
	I1007 10:46:27.378286   23621 ssh_runner.go:195] Run: which crictl
	I1007 10:46:27.382179   23621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 10:46:27.423267   23621 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 10:46:27.423395   23621 ssh_runner.go:195] Run: crio --version
	I1007 10:46:27.453236   23621 ssh_runner.go:195] Run: crio --version
	I1007 10:46:27.483657   23621 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 10:46:27.484938   23621 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:46:27.487606   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:27.487998   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:27.488028   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:27.488343   23621 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 10:46:27.492528   23621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:46:27.506306   23621 kubeadm.go:883] updating cluster {Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1007 10:46:27.506405   23621 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:46:27.506452   23621 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:46:27.539872   23621 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 10:46:27.539951   23621 ssh_runner.go:195] Run: which lz4
	I1007 10:46:27.544145   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1007 10:46:27.544248   23621 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 10:46:27.549024   23621 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 10:46:27.549064   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 10:46:28.958319   23621 crio.go:462] duration metric: took 1.414106826s to copy over tarball
	I1007 10:46:28.958395   23621 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 10:46:30.997682   23621 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.039251996s)
	I1007 10:46:30.997713   23621 crio.go:469] duration metric: took 2.039368509s to extract the tarball
	I1007 10:46:30.997720   23621 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 10:46:31.039009   23621 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:46:31.088841   23621 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 10:46:31.088866   23621 cache_images.go:84] Images are preloaded, skipping loading
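The preload step above checks the CRI-O image store, copies the lz4 tarball onto the guest when the expected images are missing, extracts it under /var, removes the tarball, and then re-checks. A rough sketch of that sequence using plain ssh/scp; the real code goes through minikube's ssh_runner and copies straight to /preloaded.tar.lz4, whereas this sketch stages through /tmp.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// preloadImages sketches the check/copy/extract sequence above. Host address,
// key path, and tarball path are the ones from the log; the helper itself is
// illustrative, not minikube's preload handling.
func preloadImages(addr, keyPath, tarball string) error {
	ssh := func(cmd string) ([]byte, error) {
		return exec.Command("ssh", "-i", keyPath, "-o", "StrictHostKeyChecking=no",
			"docker@"+addr, cmd).CombinedOutput()
	}
	out, err := ssh("sudo crictl images --output json")
	if err == nil && strings.Contains(string(out), "registry.k8s.io/kube-apiserver:v1.31.1") {
		return nil // already preloaded, nothing to copy
	}
	if err := exec.Command("scp", "-i", keyPath, "-o", "StrictHostKeyChecking=no",
		tarball, "docker@"+addr+":/tmp/preloaded.tar.lz4").Run(); err != nil {
		return fmt.Errorf("copy tarball: %w", err)
	}
	if _, err := ssh("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo rm /tmp/preloaded.tar.lz4"); err != nil {
		return fmt.Errorf("extract tarball: %w", err)
	}
	return nil
}

func main() {
	fmt.Println(preloadImages("192.168.39.250",
		"/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa",
		"/home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4"))
}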
	I1007 10:46:31.088873   23621 kubeadm.go:934] updating node { 192.168.39.250 8443 v1.31.1 crio true true} ...
	I1007 10:46:31.089007   23621 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406505 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
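The kubelet drop-in above (and the kubeadm config that follows) is rendered from the option set printed here. A much-abridged text/template sketch of that rendering step; the struct and template below are simplified illustrations, not minikube's actual templates.

package main

import (
	"os"
	"text/template"
)

// kubeletParams is a trimmed-down stand-in for the options shown in the log;
// only the fields the template below uses.
type kubeletParams struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

const kubeletUnitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	p := kubeletParams{KubernetesVersion: "v1.31.1", NodeName: "ha-406505", NodeIP: "192.168.39.250"}
	// Render the drop-in to stdout; minikube writes the rendered bytes to
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the guest.
	if err := template.Must(template.New("kubelet").Parse(kubeletUnitTmpl)).Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}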
	I1007 10:46:31.089099   23621 ssh_runner.go:195] Run: crio config
	I1007 10:46:31.133611   23621 cni.go:84] Creating CNI manager for ""
	I1007 10:46:31.133634   23621 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 10:46:31.133642   23621 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 10:46:31.133662   23621 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.250 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-406505 NodeName:ha-406505 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 10:46:31.133799   23621 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-406505"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 10:46:31.133825   23621 kube-vip.go:115] generating kube-vip config ...
	I1007 10:46:31.133864   23621 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 10:46:31.150299   23621 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 10:46:31.150386   23621 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1007 10:46:31.150432   23621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 10:46:31.160704   23621 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 10:46:31.160771   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1007 10:46:31.170635   23621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1007 10:46:31.188233   23621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 10:46:31.205276   23621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1007 10:46:31.222191   23621 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1007 10:46:31.240224   23621 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 10:46:31.244214   23621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:46:31.257345   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:46:31.397967   23621 ssh_runner.go:195] Run: sudo systemctl start kubelet
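Lines like "scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)" above write an in-memory buffer to a file on the guest rather than copying an existing local file. A rough sketch of that idea, piping the bytes through ssh into sudo tee; this is illustrative only and not necessarily how ssh_runner performs the transfer.

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// writeRemoteFile pushes an in-memory buffer to a path on the guest by piping
// it through ssh into `sudo tee`, similar in spirit to the "scp memory -->"
// lines above.
func writeRemoteFile(addr, keyPath, remotePath string, data []byte) error {
	cmd := exec.Command("ssh",
		"-i", keyPath,
		"-o", "StrictHostKeyChecking=no",
		"docker@"+addr,
		"sudo tee "+remotePath+" >/dev/null")
	cmd.Stdin = bytes.NewReader(data)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("write %s: %v (%s)", remotePath, err, out)
	}
	return nil
}

func main() {
	// An abridged stand-in for the kubelet drop-in written above.
	unit := []byte("[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --config=/var/lib/kubelet/config.yaml\n")
	err := writeRemoteFile("192.168.39.250",
		"/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa",
		"/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", unit)
	fmt.Println(err)
}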
	I1007 10:46:31.417027   23621 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505 for IP: 192.168.39.250
	I1007 10:46:31.417077   23621 certs.go:194] generating shared ca certs ...
	I1007 10:46:31.417100   23621 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.417284   23621 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 10:46:31.417383   23621 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 10:46:31.417398   23621 certs.go:256] generating profile certs ...
	I1007 10:46:31.417447   23621 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key
	I1007 10:46:31.417461   23621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.crt with IP's: []
	I1007 10:46:31.468016   23621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.crt ...
	I1007 10:46:31.468047   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.crt: {Name:mk762d603dc2fbb5c1297f6a7a3cc345fce24083 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.468271   23621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key ...
	I1007 10:46:31.468286   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key: {Name:mk7067411a96e86ff81d9c76638d9b65fd88775f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.468374   23621 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.139948ad
	I1007 10:46:31.468389   23621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.139948ad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.250 192.168.39.254]
	I1007 10:46:31.560197   23621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.139948ad ...
	I1007 10:46:31.560235   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.139948ad: {Name:mk03ccdd590c02d4a8e3fdabb8ce2b00441c3bcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.560434   23621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.139948ad ...
	I1007 10:46:31.560450   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.139948ad: {Name:mk9acbd48737ac1a11351bcc3c9e01a19e35889d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.560533   23621 certs.go:381] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.139948ad -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt
	I1007 10:46:31.560605   23621 certs.go:385] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.139948ad -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key
	I1007 10:46:31.560660   23621 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key
	I1007 10:46:31.560674   23621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt with IP's: []
	I1007 10:46:31.824715   23621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt ...
	I1007 10:46:31.824745   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt: {Name:mk2f87794c4b3ce39df0df4382fd33d9633bb32b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.824924   23621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key ...
	I1007 10:46:31.824937   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key: {Name:mka71f56202903b2b66df7c3367c064cbfe379ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.825016   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 10:46:31.825037   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 10:46:31.825053   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 10:46:31.825068   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 10:46:31.825083   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 10:46:31.825098   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 10:46:31.825112   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 10:46:31.825130   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 10:46:31.825188   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 10:46:31.825225   23621 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 10:46:31.825236   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 10:46:31.825267   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 10:46:31.825296   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 10:46:31.825321   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 10:46:31.825363   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:46:31.825391   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:46:31.825407   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem -> /usr/share/ca-certificates/11096.pem
	I1007 10:46:31.825421   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /usr/share/ca-certificates/110962.pem
	I1007 10:46:31.825934   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 10:46:31.854979   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 10:46:31.881623   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 10:46:31.908276   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 10:46:31.933657   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1007 10:46:31.959947   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 10:46:31.985851   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 10:46:32.010600   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 10:46:32.035549   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 10:46:32.060173   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 10:46:32.084842   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 10:46:32.110513   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
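The scp calls above copy the profile's certificates from the Jenkins workspace into /var/lib/minikube/certs on the node (plus the CA copies under /usr/share/ca-certificates) and write the kubeconfig. A quick way to confirm a given cert actually landed and matches the local copy is to compare checksums over SSH; a minimal sketch, assuming the ha-406505 profile is still running and the host/guest paths shown in the log:
	# compare the local profile cert with the copy scp'd onto the node
	sha256sum /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt
	minikube -p ha-406505 ssh -- sudo sha256sum /var/lib/minikube/certs/apiserver.crt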
	I1007 10:46:32.129118   23621 ssh_runner.go:195] Run: openssl version
	I1007 10:46:32.134991   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 10:46:32.146083   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 10:46:32.150750   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 10:46:32.150813   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 10:46:32.156917   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
	I1007 10:46:32.167842   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 10:46:32.179302   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 10:46:32.184104   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 10:46:32.184166   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 10:46:32.189957   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 10:46:32.203820   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 10:46:32.218928   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:46:32.223877   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:46:32.223932   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:46:32.234358   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
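The openssl/ln sequence above follows the standard OpenSSL trust-directory convention: each CA file under /usr/share/ca-certificates is hashed with openssl x509 -hash and a symlink named <hash>.0 is created in /etc/ssl/certs so TLS clients can locate it by subject hash. The same step done by hand (illustrative; the hash b5213941 matches the minikubeCA symlink created above):
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints the subject hash, e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # same link the ssh_runner command creates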
	I1007 10:46:32.254776   23621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 10:46:32.262324   23621 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 10:46:32.262372   23621 kubeadm.go:392] StartCluster: {Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clus
terName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:46:32.262436   23621 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 10:46:32.262503   23621 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 10:46:32.310104   23621 cri.go:89] found id: ""
	I1007 10:46:32.310161   23621 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 10:46:32.319996   23621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 10:46:32.329800   23621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 10:46:32.339655   23621 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 10:46:32.339683   23621 kubeadm.go:157] found existing configuration files:
	
	I1007 10:46:32.339722   23621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 10:46:32.348661   23621 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 10:46:32.348719   23621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 10:46:32.358855   23621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 10:46:32.368082   23621 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 10:46:32.368138   23621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 10:46:32.378072   23621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 10:46:32.387338   23621 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 10:46:32.387394   23621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 10:46:32.397186   23621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 10:46:32.406684   23621 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 10:46:32.406738   23621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
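Each grep/rm pair above is the stale-config check: a kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so kubeadm can regenerate it on init. Condensed into one shell loop (a sketch of the same check, not minikube's own code):
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"   # missing or pointing elsewhere: remove it
	done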
	I1007 10:46:32.417090   23621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 10:46:32.545879   23621 kubeadm.go:310] W1007 10:46:32.529591     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 10:46:32.546834   23621 kubeadm.go:310] W1007 10:46:32.530709     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 10:46:32.656304   23621 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 10:46:43.090298   23621 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 10:46:43.090373   23621 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 10:46:43.090492   23621 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 10:46:43.090653   23621 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 10:46:43.090862   23621 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 10:46:43.090964   23621 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 10:46:43.092688   23621 out.go:235]   - Generating certificates and keys ...
	I1007 10:46:43.092759   23621 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 10:46:43.092833   23621 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 10:46:43.092901   23621 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 10:46:43.092950   23621 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 10:46:43.092999   23621 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 10:46:43.093054   23621 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 10:46:43.093106   23621 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 10:46:43.093205   23621 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-406505 localhost] and IPs [192.168.39.250 127.0.0.1 ::1]
	I1007 10:46:43.093261   23621 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 10:46:43.093417   23621 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-406505 localhost] and IPs [192.168.39.250 127.0.0.1 ::1]
	I1007 10:46:43.093514   23621 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 10:46:43.093567   23621 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 10:46:43.093623   23621 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 10:46:43.093706   23621 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 10:46:43.093782   23621 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 10:46:43.093856   23621 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 10:46:43.093933   23621 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 10:46:43.094023   23621 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 10:46:43.094096   23621 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 10:46:43.094210   23621 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 10:46:43.094282   23621 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 10:46:43.095798   23621 out.go:235]   - Booting up control plane ...
	I1007 10:46:43.095884   23621 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 10:46:43.095971   23621 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 10:46:43.096065   23621 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 10:46:43.096171   23621 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 10:46:43.096294   23621 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 10:46:43.096350   23621 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 10:46:43.096510   23621 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 10:46:43.096664   23621 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 10:46:43.096745   23621 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.992623ms
	I1007 10:46:43.096840   23621 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 10:46:43.096957   23621 kubeadm.go:310] [api-check] The API server is healthy after 6.063891261s
	I1007 10:46:43.097083   23621 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 10:46:43.097207   23621 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 10:46:43.097264   23621 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 10:46:43.097410   23621 kubeadm.go:310] [mark-control-plane] Marking the node ha-406505 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 10:46:43.097470   23621 kubeadm.go:310] [bootstrap-token] Using token: wypuxz.8mosh3hhf4vr1jtg
	I1007 10:46:43.098950   23621 out.go:235]   - Configuring RBAC rules ...
	I1007 10:46:43.099071   23621 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 10:46:43.099163   23621 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 10:46:43.099343   23621 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 10:46:43.099509   23621 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 10:46:43.099662   23621 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 10:46:43.099752   23621 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 10:46:43.099910   23621 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 10:46:43.099999   23621 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 10:46:43.100092   23621 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 10:46:43.100101   23621 kubeadm.go:310] 
	I1007 10:46:43.100184   23621 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 10:46:43.100194   23621 kubeadm.go:310] 
	I1007 10:46:43.100298   23621 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 10:46:43.100307   23621 kubeadm.go:310] 
	I1007 10:46:43.100344   23621 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 10:46:43.100433   23621 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 10:46:43.100524   23621 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 10:46:43.100533   23621 kubeadm.go:310] 
	I1007 10:46:43.100614   23621 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 10:46:43.100626   23621 kubeadm.go:310] 
	I1007 10:46:43.100698   23621 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 10:46:43.100713   23621 kubeadm.go:310] 
	I1007 10:46:43.100756   23621 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 10:46:43.100822   23621 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 10:46:43.100914   23621 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 10:46:43.100930   23621 kubeadm.go:310] 
	I1007 10:46:43.101035   23621 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 10:46:43.101136   23621 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 10:46:43.101145   23621 kubeadm.go:310] 
	I1007 10:46:43.101255   23621 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wypuxz.8mosh3hhf4vr1jtg \
	I1007 10:46:43.101367   23621 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df \
	I1007 10:46:43.101400   23621 kubeadm.go:310] 	--control-plane 
	I1007 10:46:43.101407   23621 kubeadm.go:310] 
	I1007 10:46:43.101475   23621 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 10:46:43.101485   23621 kubeadm.go:310] 
	I1007 10:46:43.101546   23621 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wypuxz.8mosh3hhf4vr1jtg \
	I1007 10:46:43.101655   23621 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df 
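The join commands printed by kubeadm above carry a bootstrap token and a CA certificate hash. If that hash needs to be re-derived later, the documented openssl recipe can be run against the cluster CA in the certificateDir the log reports (/var/lib/minikube/certs); shown here as a hedged example on the control-plane node, assuming the default RSA CA key:
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# the output should match the sha256:... value in the kubeadm join lines above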
	I1007 10:46:43.101680   23621 cni.go:84] Creating CNI manager for ""
	I1007 10:46:43.101688   23621 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 10:46:43.103490   23621 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1007 10:46:43.104857   23621 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1007 10:46:43.110599   23621 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1007 10:46:43.110619   23621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1007 10:46:43.132034   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
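Because only one node exists so far, minikube selects kindnet and applies its manifest with the node-local kubectl against /var/lib/minikube/kubeconfig. A quick check that the CNI DaemonSet was created (the DaemonSet name is not shown in the log, so list them all rather than guess it):
	minikube -p ha-406505 ssh -- sudo /var/lib/minikube/binaries/v1.31.1/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get daemonsets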
	I1007 10:46:43.562211   23621 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 10:46:43.562270   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:43.562324   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-406505 minikube.k8s.io/updated_at=2024_10_07T10_46_43_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f minikube.k8s.io/name=ha-406505 minikube.k8s.io/primary=true
	I1007 10:46:43.616727   23621 ops.go:34] apiserver oom_adj: -16
	I1007 10:46:43.782316   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:44.282755   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:44.782532   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:45.283204   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:45.783063   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:46.283266   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:46.783411   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:46.943992   23621 kubeadm.go:1113] duration metric: took 3.381769921s to wait for elevateKubeSystemPrivileges
	I1007 10:46:46.944035   23621 kubeadm.go:394] duration metric: took 14.681663569s to StartCluster
	I1007 10:46:46.944056   23621 settings.go:142] acquiring lock: {Name:mk699f217216dbe513edf6a42c79fe85f8c20124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:46.944147   23621 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:46:46.945102   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/kubeconfig: {Name:mkc8a5ce1dbafe55e056433fff5c065506f83346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:46.945388   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 10:46:46.945386   23621 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:46:46.945413   23621 start.go:241] waiting for startup goroutines ...
	I1007 10:46:46.945429   23621 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 10:46:46.945523   23621 addons.go:69] Setting storage-provisioner=true in profile "ha-406505"
	I1007 10:46:46.945543   23621 addons.go:234] Setting addon storage-provisioner=true in "ha-406505"
	I1007 10:46:46.945553   23621 addons.go:69] Setting default-storageclass=true in profile "ha-406505"
	I1007 10:46:46.945572   23621 host.go:66] Checking if "ha-406505" exists ...
	I1007 10:46:46.945583   23621 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-406505"
	I1007 10:46:46.945607   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:46:46.946008   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:46.946009   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:46.946088   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:46.946051   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:46.961784   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45751
	I1007 10:46:46.961861   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42021
	I1007 10:46:46.962343   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:46.962400   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:46.962845   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:46.962858   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:46.962977   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:46.962998   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:46.963231   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:46.963434   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:46.963629   23621 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:46:46.963828   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:46.963879   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:46.966424   23621 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:46:46.966748   23621 kapi.go:59] client config for ha-406505: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.crt", KeyFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key", CAFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 10:46:46.967326   23621 cert_rotation.go:140] Starting client certificate rotation controller
	I1007 10:46:46.967544   23621 addons.go:234] Setting addon default-storageclass=true in "ha-406505"
	I1007 10:46:46.967595   23621 host.go:66] Checking if "ha-406505" exists ...
	I1007 10:46:46.967974   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:46.968044   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:46.980041   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40697
	I1007 10:46:46.980679   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:46.981275   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:46.981307   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:46.981679   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:46.981861   23621 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:46:46.982917   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43249
	I1007 10:46:46.983418   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:46.983677   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:46.983888   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:46.983902   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:46.984223   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:46.984726   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:46.984780   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:46.985635   23621 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 10:46:46.986794   23621 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 10:46:46.986811   23621 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 10:46:46.986827   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:46.990137   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:46.990593   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:46.990630   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:46.990792   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:46.990980   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:46.991153   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:46.991295   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:46:47.000938   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34485
	I1007 10:46:47.001317   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:47.001822   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:47.001835   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:47.002157   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:47.002359   23621 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:46:47.004192   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:47.004381   23621 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 10:46:47.004396   23621 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 10:46:47.004415   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:47.007286   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:47.007709   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:47.007733   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:47.007859   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:47.008018   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:47.008149   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:47.008248   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:46:47.195335   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 10:46:47.217916   23621 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 10:46:47.332630   23621 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 10:46:47.810865   23621 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
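The sed pipeline at 10:46:47.195 rewrites the coredns ConfigMap in place, inserting a hosts block that maps host.minikube.internal to 192.168.39.1 ahead of the forward plugin. To see the injected record afterwards (same node-local kubectl invocation style as above):
	minikube -p ha-406505 ssh -- sudo /var/lib/minikube/binaries/v1.31.1/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml \
	  | grep -A3 "hosts"   # shows the 192.168.39.1 host.minikube.internal entry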
	I1007 10:46:48.064696   23621 main.go:141] libmachine: Making call to close driver server
	I1007 10:46:48.064705   23621 main.go:141] libmachine: Making call to close driver server
	I1007 10:46:48.064720   23621 main.go:141] libmachine: (ha-406505) Calling .Close
	I1007 10:46:48.064727   23621 main.go:141] libmachine: (ha-406505) Calling .Close
	I1007 10:46:48.064985   23621 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:46:48.065031   23621 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:46:48.065048   23621 main.go:141] libmachine: Making call to close driver server
	I1007 10:46:48.065053   23621 main.go:141] libmachine: (ha-406505) DBG | Closing plugin on server side
	I1007 10:46:48.065058   23621 main.go:141] libmachine: (ha-406505) Calling .Close
	I1007 10:46:48.064988   23621 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:46:48.065100   23621 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:46:48.065116   23621 main.go:141] libmachine: Making call to close driver server
	I1007 10:46:48.065125   23621 main.go:141] libmachine: (ha-406505) Calling .Close
	I1007 10:46:48.065104   23621 main.go:141] libmachine: (ha-406505) DBG | Closing plugin on server side
	I1007 10:46:48.065227   23621 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:46:48.065239   23621 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:46:48.066429   23621 main.go:141] libmachine: (ha-406505) DBG | Closing plugin on server side
	I1007 10:46:48.066481   23621 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:46:48.066520   23621 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:46:48.066607   23621 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 10:46:48.066629   23621 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 10:46:48.066712   23621 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1007 10:46:48.066721   23621 round_trippers.go:469] Request Headers:
	I1007 10:46:48.066729   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:46:48.066749   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:46:48.079736   23621 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1007 10:46:48.080394   23621 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1007 10:46:48.080409   23621 round_trippers.go:469] Request Headers:
	I1007 10:46:48.080417   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:46:48.080421   23621 round_trippers.go:473]     Content-Type: application/json
	I1007 10:46:48.080424   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:46:48.082744   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:46:48.082873   23621 main.go:141] libmachine: Making call to close driver server
	I1007 10:46:48.082885   23621 main.go:141] libmachine: (ha-406505) Calling .Close
	I1007 10:46:48.083166   23621 main.go:141] libmachine: (ha-406505) DBG | Closing plugin on server side
	I1007 10:46:48.083174   23621 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:46:48.083188   23621 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:46:48.084834   23621 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1007 10:46:48.085997   23621 addons.go:510] duration metric: took 1.140572645s for enable addons: enabled=[storage-provisioner default-storageclass]
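Addon enablement here is a kubectl apply of the manifests copied to /etc/kubernetes/addons, surrounded by libmachine plugin-server setup and teardown. From the host, the resulting addon state for this profile can be listed with the minikube CLI:
	minikube addons list -p ha-406505   # storage-provisioner and default-storageclass should show as enabled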
	I1007 10:46:48.086031   23621 start.go:246] waiting for cluster config update ...
	I1007 10:46:48.086044   23621 start.go:255] writing updated cluster config ...
	I1007 10:46:48.087964   23621 out.go:201] 
	I1007 10:46:48.089528   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:46:48.089609   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:46:48.091151   23621 out.go:177] * Starting "ha-406505-m02" control-plane node in "ha-406505" cluster
	I1007 10:46:48.092447   23621 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:46:48.092473   23621 cache.go:56] Caching tarball of preloaded images
	I1007 10:46:48.092563   23621 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 10:46:48.092574   23621 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 10:46:48.092637   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:46:48.092794   23621 start.go:360] acquireMachinesLock for ha-406505-m02: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 10:46:48.092831   23621 start.go:364] duration metric: took 21.347µs to acquireMachinesLock for "ha-406505-m02"
	I1007 10:46:48.092855   23621 start.go:93] Provisioning new machine with config: &{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Ce
rtExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:46:48.092915   23621 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1007 10:46:48.094418   23621 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 10:46:48.094505   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:48.094537   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:48.110315   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34135
	I1007 10:46:48.110866   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:48.111379   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:48.111403   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:48.111770   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:48.111953   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetMachineName
	I1007 10:46:48.112082   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:46:48.112219   23621 start.go:159] libmachine.API.Create for "ha-406505" (driver="kvm2")
	I1007 10:46:48.112248   23621 client.go:168] LocalClient.Create starting
	I1007 10:46:48.112287   23621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem
	I1007 10:46:48.112335   23621 main.go:141] libmachine: Decoding PEM data...
	I1007 10:46:48.112356   23621 main.go:141] libmachine: Parsing certificate...
	I1007 10:46:48.112422   23621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem
	I1007 10:46:48.112452   23621 main.go:141] libmachine: Decoding PEM data...
	I1007 10:46:48.112468   23621 main.go:141] libmachine: Parsing certificate...
	I1007 10:46:48.112494   23621 main.go:141] libmachine: Running pre-create checks...
	I1007 10:46:48.112506   23621 main.go:141] libmachine: (ha-406505-m02) Calling .PreCreateCheck
	I1007 10:46:48.112657   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetConfigRaw
	I1007 10:46:48.113018   23621 main.go:141] libmachine: Creating machine...
	I1007 10:46:48.113031   23621 main.go:141] libmachine: (ha-406505-m02) Calling .Create
	I1007 10:46:48.113183   23621 main.go:141] libmachine: (ha-406505-m02) Creating KVM machine...
	I1007 10:46:48.114398   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found existing default KVM network
	I1007 10:46:48.114519   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found existing private KVM network mk-ha-406505
	I1007 10:46:48.114657   23621 main.go:141] libmachine: (ha-406505-m02) Setting up store path in /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02 ...
	I1007 10:46:48.114682   23621 main.go:141] libmachine: (ha-406505-m02) Building disk image from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 10:46:48.114793   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:48.114651   23988 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:46:48.114857   23621 main.go:141] libmachine: (ha-406505-m02) Downloading /home/jenkins/minikube-integration/19761-3912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 10:46:48.352057   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:48.351887   23988 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa...
	I1007 10:46:48.484305   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:48.484165   23988 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/ha-406505-m02.rawdisk...
	I1007 10:46:48.484357   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Writing magic tar header
	I1007 10:46:48.484379   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Writing SSH key tar header
	I1007 10:46:48.484391   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:48.484280   23988 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02 ...
	I1007 10:46:48.484403   23621 main.go:141] libmachine: (ha-406505-m02) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02 (perms=drwx------)
	I1007 10:46:48.484420   23621 main.go:141] libmachine: (ha-406505-m02) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines (perms=drwxr-xr-x)
	I1007 10:46:48.484433   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02
	I1007 10:46:48.484444   23621 main.go:141] libmachine: (ha-406505-m02) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube (perms=drwxr-xr-x)
	I1007 10:46:48.484459   23621 main.go:141] libmachine: (ha-406505-m02) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912 (perms=drwxrwxr-x)
	I1007 10:46:48.484478   23621 main.go:141] libmachine: (ha-406505-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 10:46:48.484491   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines
	I1007 10:46:48.484510   23621 main.go:141] libmachine: (ha-406505-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 10:46:48.484523   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:46:48.484535   23621 main.go:141] libmachine: (ha-406505-m02) Creating domain...
	I1007 10:46:48.484554   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912
	I1007 10:46:48.484571   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 10:46:48.484583   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home/jenkins
	I1007 10:46:48.484602   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home
	I1007 10:46:48.484618   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Skipping /home - not owner
	I1007 10:46:48.485445   23621 main.go:141] libmachine: (ha-406505-m02) define libvirt domain using xml: 
	I1007 10:46:48.485473   23621 main.go:141] libmachine: (ha-406505-m02) <domain type='kvm'>
	I1007 10:46:48.485489   23621 main.go:141] libmachine: (ha-406505-m02)   <name>ha-406505-m02</name>
	I1007 10:46:48.485497   23621 main.go:141] libmachine: (ha-406505-m02)   <memory unit='MiB'>2200</memory>
	I1007 10:46:48.485528   23621 main.go:141] libmachine: (ha-406505-m02)   <vcpu>2</vcpu>
	I1007 10:46:48.485552   23621 main.go:141] libmachine: (ha-406505-m02)   <features>
	I1007 10:46:48.485563   23621 main.go:141] libmachine: (ha-406505-m02)     <acpi/>
	I1007 10:46:48.485574   23621 main.go:141] libmachine: (ha-406505-m02)     <apic/>
	I1007 10:46:48.485584   23621 main.go:141] libmachine: (ha-406505-m02)     <pae/>
	I1007 10:46:48.485596   23621 main.go:141] libmachine: (ha-406505-m02)     
	I1007 10:46:48.485608   23621 main.go:141] libmachine: (ha-406505-m02)   </features>
	I1007 10:46:48.485625   23621 main.go:141] libmachine: (ha-406505-m02)   <cpu mode='host-passthrough'>
	I1007 10:46:48.485637   23621 main.go:141] libmachine: (ha-406505-m02)   
	I1007 10:46:48.485645   23621 main.go:141] libmachine: (ha-406505-m02)   </cpu>
	I1007 10:46:48.485659   23621 main.go:141] libmachine: (ha-406505-m02)   <os>
	I1007 10:46:48.485671   23621 main.go:141] libmachine: (ha-406505-m02)     <type>hvm</type>
	I1007 10:46:48.485684   23621 main.go:141] libmachine: (ha-406505-m02)     <boot dev='cdrom'/>
	I1007 10:46:48.485699   23621 main.go:141] libmachine: (ha-406505-m02)     <boot dev='hd'/>
	I1007 10:46:48.485712   23621 main.go:141] libmachine: (ha-406505-m02)     <bootmenu enable='no'/>
	I1007 10:46:48.485721   23621 main.go:141] libmachine: (ha-406505-m02)   </os>
	I1007 10:46:48.485801   23621 main.go:141] libmachine: (ha-406505-m02)   <devices>
	I1007 10:46:48.485824   23621 main.go:141] libmachine: (ha-406505-m02)     <disk type='file' device='cdrom'>
	I1007 10:46:48.485840   23621 main.go:141] libmachine: (ha-406505-m02)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/boot2docker.iso'/>
	I1007 10:46:48.485854   23621 main.go:141] libmachine: (ha-406505-m02)       <target dev='hdc' bus='scsi'/>
	I1007 10:46:48.485865   23621 main.go:141] libmachine: (ha-406505-m02)       <readonly/>
	I1007 10:46:48.485875   23621 main.go:141] libmachine: (ha-406505-m02)     </disk>
	I1007 10:46:48.485902   23621 main.go:141] libmachine: (ha-406505-m02)     <disk type='file' device='disk'>
	I1007 10:46:48.485924   23621 main.go:141] libmachine: (ha-406505-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 10:46:48.485938   23621 main.go:141] libmachine: (ha-406505-m02)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/ha-406505-m02.rawdisk'/>
	I1007 10:46:48.485950   23621 main.go:141] libmachine: (ha-406505-m02)       <target dev='hda' bus='virtio'/>
	I1007 10:46:48.485972   23621 main.go:141] libmachine: (ha-406505-m02)     </disk>
	I1007 10:46:48.485982   23621 main.go:141] libmachine: (ha-406505-m02)     <interface type='network'>
	I1007 10:46:48.485991   23621 main.go:141] libmachine: (ha-406505-m02)       <source network='mk-ha-406505'/>
	I1007 10:46:48.485999   23621 main.go:141] libmachine: (ha-406505-m02)       <model type='virtio'/>
	I1007 10:46:48.486005   23621 main.go:141] libmachine: (ha-406505-m02)     </interface>
	I1007 10:46:48.486013   23621 main.go:141] libmachine: (ha-406505-m02)     <interface type='network'>
	I1007 10:46:48.486025   23621 main.go:141] libmachine: (ha-406505-m02)       <source network='default'/>
	I1007 10:46:48.486034   23621 main.go:141] libmachine: (ha-406505-m02)       <model type='virtio'/>
	I1007 10:46:48.486044   23621 main.go:141] libmachine: (ha-406505-m02)     </interface>
	I1007 10:46:48.486053   23621 main.go:141] libmachine: (ha-406505-m02)     <serial type='pty'>
	I1007 10:46:48.486063   23621 main.go:141] libmachine: (ha-406505-m02)       <target port='0'/>
	I1007 10:46:48.486074   23621 main.go:141] libmachine: (ha-406505-m02)     </serial>
	I1007 10:46:48.486084   23621 main.go:141] libmachine: (ha-406505-m02)     <console type='pty'>
	I1007 10:46:48.486094   23621 main.go:141] libmachine: (ha-406505-m02)       <target type='serial' port='0'/>
	I1007 10:46:48.486098   23621 main.go:141] libmachine: (ha-406505-m02)     </console>
	I1007 10:46:48.486106   23621 main.go:141] libmachine: (ha-406505-m02)     <rng model='virtio'>
	I1007 10:46:48.486122   23621 main.go:141] libmachine: (ha-406505-m02)       <backend model='random'>/dev/random</backend>
	I1007 10:46:48.486134   23621 main.go:141] libmachine: (ha-406505-m02)     </rng>
	I1007 10:46:48.486147   23621 main.go:141] libmachine: (ha-406505-m02)     
	I1007 10:46:48.486157   23621 main.go:141] libmachine: (ha-406505-m02)     
	I1007 10:46:48.486167   23621 main.go:141] libmachine: (ha-406505-m02)   </devices>
	I1007 10:46:48.486184   23621 main.go:141] libmachine: (ha-406505-m02) </domain>
	I1007 10:46:48.486192   23621 main.go:141] libmachine: (ha-406505-m02) 
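The block above is the libvirt domain XML the kvm2 driver generated for ha-406505-m02 (2200 MiB of RAM, 2 vCPUs, the boot2docker ISO as a CD-ROM boot device, a raw-format virtio disk, and two virtio NICs on the mk-ha-406505 and default networks). For orientation only, here is a minimal sketch of defining and starting such a domain with the libvirt Go bindings; the connection URI, helper name, and error handling are assumptions for illustration, not the driver's actual code.

// Sketch only: define a persistent domain from XML and boot it, roughly the
// "Getting domain xml... / Creating domain..." steps in the log above.
package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system") // hypothetical URI
	if err != nil {
		return err
	}
	defer conn.Close()

	// Persistently define the domain from the XML shown above.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	// Boot the VM.
	return dom.Create()
}

func main() {
	if err := defineAndStart("<domain>...</domain>"); err != nil {
		log.Fatal(err)
	}
}

From the command line, `virsh define` followed by `virsh start` performs the same two steps.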
	I1007 10:46:48.492959   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:11:dc:7d in network default
	I1007 10:46:48.493532   23621 main.go:141] libmachine: (ha-406505-m02) Ensuring networks are active...
	I1007 10:46:48.493555   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:48.494204   23621 main.go:141] libmachine: (ha-406505-m02) Ensuring network default is active
	I1007 10:46:48.494531   23621 main.go:141] libmachine: (ha-406505-m02) Ensuring network mk-ha-406505 is active
	I1007 10:46:48.494994   23621 main.go:141] libmachine: (ha-406505-m02) Getting domain xml...
	I1007 10:46:48.495697   23621 main.go:141] libmachine: (ha-406505-m02) Creating domain...
	I1007 10:46:49.708066   23621 main.go:141] libmachine: (ha-406505-m02) Waiting to get IP...
	I1007 10:46:49.709797   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:49.710242   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:49.710274   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:49.710223   23988 retry.go:31] will retry after 204.773065ms: waiting for machine to come up
	I1007 10:46:49.916620   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:49.917029   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:49.917049   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:49.916992   23988 retry.go:31] will retry after 235.714104ms: waiting for machine to come up
	I1007 10:46:50.154409   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:50.154821   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:50.154854   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:50.154800   23988 retry.go:31] will retry after 473.988416ms: waiting for machine to come up
	I1007 10:46:50.630146   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:50.630593   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:50.630617   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:50.630561   23988 retry.go:31] will retry after 436.51933ms: waiting for machine to come up
	I1007 10:46:51.068126   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:51.068602   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:51.068629   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:51.068593   23988 retry.go:31] will retry after 554.772898ms: waiting for machine to come up
	I1007 10:46:51.625423   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:51.625799   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:51.625821   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:51.625760   23988 retry.go:31] will retry after 790.073775ms: waiting for machine to come up
	I1007 10:46:52.417715   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:52.418041   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:52.418068   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:52.417996   23988 retry.go:31] will retry after 1.143940138s: waiting for machine to come up
	I1007 10:46:53.563665   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:53.564172   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:53.564191   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:53.564119   23988 retry.go:31] will retry after 1.216262675s: waiting for machine to come up
	I1007 10:46:54.782182   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:54.782642   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:54.782668   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:54.782571   23988 retry.go:31] will retry after 1.336251943s: waiting for machine to come up
	I1007 10:46:56.120924   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:56.121343   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:56.121364   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:56.121297   23988 retry.go:31] will retry after 2.26253824s: waiting for machine to come up
	I1007 10:46:58.385702   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:58.386103   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:58.386134   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:58.386057   23988 retry.go:31] will retry after 1.827723489s: waiting for machine to come up
	I1007 10:47:00.215316   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:00.215726   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:47:00.215747   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:47:00.215701   23988 retry.go:31] will retry after 2.599258612s: waiting for machine to come up
	I1007 10:47:02.818331   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:02.818781   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:47:02.818803   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:47:02.818737   23988 retry.go:31] will retry after 3.193038382s: waiting for machine to come up
	I1007 10:47:06.014368   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:06.014784   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:47:06.014809   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:47:06.014743   23988 retry.go:31] will retry after 3.576827994s: waiting for machine to come up
	I1007 10:47:09.593923   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:09.594365   23621 main.go:141] libmachine: (ha-406505-m02) Found IP for machine: 192.168.39.37
	I1007 10:47:09.594385   23621 main.go:141] libmachine: (ha-406505-m02) Reserving static IP address...
	I1007 10:47:09.594399   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has current primary IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:09.594746   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find host DHCP lease matching {name: "ha-406505-m02", mac: "52:54:00:c4:d0:65", ip: "192.168.39.37"} in network mk-ha-406505
	I1007 10:47:09.668479   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Getting to WaitForSSH function...
	I1007 10:47:09.668509   23621 main.go:141] libmachine: (ha-406505-m02) Reserved static IP address: 192.168.39.37
	I1007 10:47:09.668519   23621 main.go:141] libmachine: (ha-406505-m02) Waiting for SSH to be available...
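The retry lines above poll the mk-ha-406505 network until a DHCP lease for MAC 52:54:00:c4:d0:65 appears, growing the wait between attempts before settling on 192.168.39.37. Below is a rough sketch of that kind of lease polling with the libvirt Go bindings; the function name, backoff, and attempt count are illustrative assumptions, not minikube's retry.go.

// Sketch only: poll a libvirt network's DHCP leases for a known MAC address.
package main

import (
	"fmt"
	"log"
	"time"

	"libvirt.org/go/libvirt"
)

// waitForIP keeps listing DHCP leases until one matches the given MAC.
func waitForIP(conn *libvirt.Connect, netName, mac string) (string, error) {
	net, err := conn.LookupNetworkByName(netName)
	if err != nil {
		return "", err
	}
	defer net.Free()

	delay := 200 * time.Millisecond // the first retry in the log is ~205ms
	for attempt := 0; attempt < 15; attempt++ {
		leases, err := net.GetDHCPLeases()
		if err != nil {
			return "", err
		}
		for _, l := range leases {
			if l.Mac == mac {
				return l.IPaddr, nil
			}
		}
		time.Sleep(delay)
		delay += delay / 2 // grow the wait, roughly like the log's backoff
	}
	return "", fmt.Errorf("no DHCP lease for %s in %s", mac, netName)
}

func main() {
	conn, err := libvirt.NewConnect("qemu:///system") // hypothetical URI
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ip, err := waitForIP(conn, "mk-ha-406505", "52:54:00:c4:d0:65")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("found IP:", ip) // 192.168.39.37 in the run above
}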
	I1007 10:47:09.670956   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:09.671275   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505
	I1007 10:47:09.671303   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find defined IP address of network mk-ha-406505 interface with MAC address 52:54:00:c4:d0:65
	I1007 10:47:09.671456   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Using SSH client type: external
	I1007 10:47:09.671481   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa (-rw-------)
	I1007 10:47:09.671540   23621 main.go:141] libmachine: (ha-406505-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 10:47:09.671566   23621 main.go:141] libmachine: (ha-406505-m02) DBG | About to run SSH command:
	I1007 10:47:09.671585   23621 main.go:141] libmachine: (ha-406505-m02) DBG | exit 0
	I1007 10:47:09.675078   23621 main.go:141] libmachine: (ha-406505-m02) DBG | SSH cmd err, output: exit status 255: 
	I1007 10:47:09.675099   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1007 10:47:09.675105   23621 main.go:141] libmachine: (ha-406505-m02) DBG | command : exit 0
	I1007 10:47:09.675110   23621 main.go:141] libmachine: (ha-406505-m02) DBG | err     : exit status 255
	I1007 10:47:09.675118   23621 main.go:141] libmachine: (ha-406505-m02) DBG | output  : 
	I1007 10:47:12.677242   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Getting to WaitForSSH function...
	I1007 10:47:12.679802   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:12.680241   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:12.680268   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:12.680410   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Using SSH client type: external
	I1007 10:47:12.680433   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa (-rw-------)
	I1007 10:47:12.680466   23621 main.go:141] libmachine: (ha-406505-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.37 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 10:47:12.680481   23621 main.go:141] libmachine: (ha-406505-m02) DBG | About to run SSH command:
	I1007 10:47:12.680494   23621 main.go:141] libmachine: (ha-406505-m02) DBG | exit 0
	I1007 10:47:12.804189   23621 main.go:141] libmachine: (ha-406505-m02) DBG | SSH cmd err, output: <nil>: 
	I1007 10:47:12.804446   23621 main.go:141] libmachine: (ha-406505-m02) KVM machine creation complete!
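WaitForSSH above keeps re-running the no-op command `exit 0` until it returns status 0 (the first attempt fails with status 255 while sshd is still coming up, and the next attempt succeeds). Below is a self-contained sketch of that probe using golang.org/x/crypto/ssh; the host, user, key path, and roughly 3-second retry interval are taken from the log, while the helper itself is an illustrative assumption rather than libmachine's implementation.

// Sketch only: check SSH readiness by running "exit 0" over an SSH session.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func sshReady(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0") // the same no-op command the log runs
}

func main() {
	key := "/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa"
	for i := 0; i < 10; i++ {
		if err := sshReady("192.168.39.37:22", "docker", key); err != nil {
			fmt.Println("ssh not ready yet:", err)
			time.Sleep(3 * time.Second) // the log retries after ~3s
			continue
		}
		fmt.Println("ssh is available")
		return
	}
	fmt.Println("gave up waiting for ssh")
}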
	I1007 10:47:12.804774   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetConfigRaw
	I1007 10:47:12.805439   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:12.805661   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:12.805843   23621 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 10:47:12.805857   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetState
	I1007 10:47:12.807411   23621 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 10:47:12.807423   23621 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 10:47:12.807428   23621 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 10:47:12.807434   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:12.809666   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:12.809974   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:12.810001   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:12.810264   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:12.810464   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:12.810653   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:12.810803   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:12.810961   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:47:12.811169   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I1007 10:47:12.811184   23621 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 10:47:12.919372   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:47:12.919420   23621 main.go:141] libmachine: Detecting the provisioner...
	I1007 10:47:12.919430   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:12.922565   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:12.922966   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:12.922996   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:12.923171   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:12.923359   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:12.923510   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:12.923635   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:12.923785   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:47:12.923977   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I1007 10:47:12.924003   23621 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 10:47:13.033371   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 10:47:13.033448   23621 main.go:141] libmachine: found compatible host: buildroot
	I1007 10:47:13.033459   23621 main.go:141] libmachine: Provisioning with buildroot...
	I1007 10:47:13.033472   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetMachineName
	I1007 10:47:13.033744   23621 buildroot.go:166] provisioning hostname "ha-406505-m02"
	I1007 10:47:13.033784   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetMachineName
	I1007 10:47:13.033956   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:13.036444   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.036782   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.036811   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.036919   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:13.037077   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.037212   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.037334   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:13.037500   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:47:13.037700   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I1007 10:47:13.037718   23621 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406505-m02 && echo "ha-406505-m02" | sudo tee /etc/hostname
	I1007 10:47:13.163957   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406505-m02
	
	I1007 10:47:13.164007   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:13.166790   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.167220   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.167245   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.167419   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:13.167615   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.167799   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.167934   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:13.168112   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:47:13.168270   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I1007 10:47:13.168286   23621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406505-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406505-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406505-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 10:47:13.289811   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:47:13.289837   23621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 10:47:13.289852   23621 buildroot.go:174] setting up certificates
	I1007 10:47:13.289860   23621 provision.go:84] configureAuth start
	I1007 10:47:13.289876   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetMachineName
	I1007 10:47:13.290178   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetIP
	I1007 10:47:13.292829   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.293122   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.293145   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.293256   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:13.296131   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.296632   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.296661   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.296855   23621 provision.go:143] copyHostCerts
	I1007 10:47:13.296886   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:47:13.296917   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 10:47:13.296926   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:47:13.296997   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 10:47:13.297093   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:47:13.297110   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 10:47:13.297114   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:47:13.297137   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 10:47:13.297178   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:47:13.297193   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 10:47:13.297199   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:47:13.297219   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 10:47:13.297264   23621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.ha-406505-m02 san=[127.0.0.1 192.168.39.37 ha-406505-m02 localhost minikube]
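The provision step above generates a server certificate for ha-406505-m02 signed by the shared minikube CA, with the SANs listed in the log (127.0.0.1, 192.168.39.37, ha-406505-m02, localhost, minikube). As a rough illustration of SAN handling with Go's crypto/x509, the sketch below creates a throwaway CA and signs a server certificate carrying those same names; key sizes, validity period, and error handling are assumptions, and the real flow reuses the existing ca.pem/ca-key.pem rather than generating a CA.

// Sketch only: sign a server certificate with IP and DNS SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Throwaway CA for the sketch; the real flow reuses .minikube/certs/ca.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	must(err)
	caCert, err := x509.ParseCertificate(caDER)
	must(err)

	// Server certificate with the SANs from the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-406505-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.37")},
		DNSNames:     []string{"ha-406505-m02", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}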
	I1007 10:47:13.470867   23621 provision.go:177] copyRemoteCerts
	I1007 10:47:13.470925   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 10:47:13.470948   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:13.473620   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.473865   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.473901   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.474152   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:13.474379   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.474538   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:13.474650   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa Username:docker}
	I1007 10:47:13.558906   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 10:47:13.558995   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 10:47:13.584265   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 10:47:13.584335   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 10:47:13.609098   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 10:47:13.609208   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 10:47:13.633989   23621 provision.go:87] duration metric: took 344.11512ms to configureAuth
	I1007 10:47:13.634025   23621 buildroot.go:189] setting minikube options for container-runtime
	I1007 10:47:13.634234   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:47:13.634302   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:13.636945   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.637279   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.637307   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.637491   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:13.637663   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.637855   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.638031   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:13.638190   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:47:13.638341   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I1007 10:47:13.638355   23621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 10:47:13.873602   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 10:47:13.873628   23621 main.go:141] libmachine: Checking connection to Docker...
	I1007 10:47:13.873636   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetURL
	I1007 10:47:13.874889   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Using libvirt version 6000000
	I1007 10:47:13.877460   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.877837   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.877860   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.878084   23621 main.go:141] libmachine: Docker is up and running!
	I1007 10:47:13.878101   23621 main.go:141] libmachine: Reticulating splines...
	I1007 10:47:13.878109   23621 client.go:171] duration metric: took 25.765852825s to LocalClient.Create
	I1007 10:47:13.878137   23621 start.go:167] duration metric: took 25.765919747s to libmachine.API.Create "ha-406505"
	I1007 10:47:13.878150   23621 start.go:293] postStartSetup for "ha-406505-m02" (driver="kvm2")
	I1007 10:47:13.878166   23621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 10:47:13.878189   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:13.878390   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 10:47:13.878411   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:13.880668   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.881014   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.881044   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.881180   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:13.881364   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.881519   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:13.881655   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa Username:docker}
	I1007 10:47:13.968514   23621 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 10:47:13.973091   23621 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 10:47:13.973116   23621 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 10:47:13.973185   23621 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 10:47:13.973262   23621 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 10:47:13.973272   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /etc/ssl/certs/110962.pem
	I1007 10:47:13.973349   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 10:47:13.984972   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:47:14.013706   23621 start.go:296] duration metric: took 135.541721ms for postStartSetup
	I1007 10:47:14.013768   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetConfigRaw
	I1007 10:47:14.014387   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetIP
	I1007 10:47:14.017290   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.017760   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:14.017791   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.018011   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:47:14.018210   23621 start.go:128] duration metric: took 25.92528673s to createHost
	I1007 10:47:14.018236   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:14.020800   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.021086   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:14.021115   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.021288   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:14.021489   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:14.021660   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:14.021768   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:14.021952   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:47:14.022115   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I1007 10:47:14.022125   23621 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 10:47:14.132989   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728298034.110680519
	
	I1007 10:47:14.133013   23621 fix.go:216] guest clock: 1728298034.110680519
	I1007 10:47:14.133022   23621 fix.go:229] Guest: 2024-10-07 10:47:14.110680519 +0000 UTC Remote: 2024-10-07 10:47:14.018221797 +0000 UTC m=+73.371361289 (delta=92.458722ms)
	I1007 10:47:14.133040   23621 fix.go:200] guest clock delta is within tolerance: 92.458722ms
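The fix.go lines above compare the guest clock, read with `date +%s.%N`, against the host-side timestamp and accept the 92.458722ms delta as within tolerance. The small sketch below reproduces that comparison for the two values in the log; the parsing helper and the 2-second tolerance are assumptions for illustration, not minikube's actual threshold.

// Sketch only: parse `date +%s.%N` output and check the clock delta.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch converts "seconds.nanoseconds" output to a time.Time.
// It assumes a 9-digit nanosecond field, as in the log output above.
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseEpoch("1728298034.110680519") // guest clock from the log
	if err != nil {
		panic(err)
	}
	// Host-side timestamp recorded at the same moment in the log.
	host := time.Date(2024, 10, 7, 10, 47, 14, 18221797, time.UTC)

	delta := guest.Sub(host)      // ~92.458722ms, matching the log
	tolerance := 2 * time.Second  // hypothetical tolerance for this sketch
	if delta < tolerance && delta > -tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}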
	I1007 10:47:14.133051   23621 start.go:83] releasing machines lock for "ha-406505-m02", held for 26.040206453s
	I1007 10:47:14.133067   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:14.133299   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetIP
	I1007 10:47:14.135869   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.136305   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:14.136328   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.139140   23621 out.go:177] * Found network options:
	I1007 10:47:14.140689   23621 out.go:177]   - NO_PROXY=192.168.39.250
	W1007 10:47:14.142083   23621 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 10:47:14.142129   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:14.142678   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:14.142868   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:14.142974   23621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 10:47:14.143014   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	W1007 10:47:14.143107   23621 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 10:47:14.143184   23621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 10:47:14.143226   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:14.145983   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.146148   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.146289   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:14.146315   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.146499   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:14.146575   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:14.146609   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.146657   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:14.146758   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:14.146834   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:14.146877   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:14.146982   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa Username:docker}
	I1007 10:47:14.147039   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:14.147184   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa Username:docker}
	I1007 10:47:14.387899   23621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 10:47:14.394771   23621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 10:47:14.394848   23621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 10:47:14.410661   23621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 10:47:14.410689   23621 start.go:495] detecting cgroup driver to use...
	I1007 10:47:14.410772   23621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 10:47:14.427868   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 10:47:14.444153   23621 docker.go:217] disabling cri-docker service (if available) ...
	I1007 10:47:14.444206   23621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 10:47:14.460223   23621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 10:47:14.476365   23621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 10:47:14.606104   23621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 10:47:14.745910   23621 docker.go:233] disabling docker service ...
	I1007 10:47:14.745980   23621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 10:47:14.760987   23621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 10:47:14.774829   23621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 10:47:14.912287   23621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 10:47:15.035180   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 10:47:15.050257   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 10:47:15.070114   23621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 10:47:15.070181   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.081232   23621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 10:47:15.081328   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.097360   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.109085   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.120920   23621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 10:47:15.132712   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.143857   23621 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.162242   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.173052   23621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 10:47:15.183576   23621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 10:47:15.183636   23621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 10:47:15.198592   23621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 10:47:15.209269   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:47:15.343340   23621 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 10:47:15.435410   23621 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 10:47:15.435495   23621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 10:47:15.440650   23621 start.go:563] Will wait 60s for crictl version
	I1007 10:47:15.440716   23621 ssh_runner.go:195] Run: which crictl
	I1007 10:47:15.445010   23621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 10:47:15.485747   23621 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
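After restarting CRI-O, the start-up code above waits up to 60s for /var/run/crio/crio.sock to appear and then for crictl to report a version. Below is a minimal local sketch of that kind of wait-for-path loop; in the real flow the stat runs on the guest over SSH, and the function name and polling interval here are illustrative.

// Sketch only: wait for a socket path to exist, with a deadline.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // the path exists, e.g. the crio socket is up
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is up")
}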
	I1007 10:47:15.485842   23621 ssh_runner.go:195] Run: crio --version
	I1007 10:47:15.514633   23621 ssh_runner.go:195] Run: crio --version
	I1007 10:47:15.544607   23621 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 10:47:15.546495   23621 out.go:177]   - env NO_PROXY=192.168.39.250
	I1007 10:47:15.547763   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetIP
	I1007 10:47:15.550503   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:15.550835   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:15.550856   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:15.551135   23621 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 10:47:15.555619   23621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:47:15.568228   23621 mustload.go:65] Loading cluster: ha-406505
	I1007 10:47:15.568429   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:47:15.568711   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:47:15.568757   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:47:15.583930   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32787
	I1007 10:47:15.584453   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:47:15.584977   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:47:15.584999   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:47:15.585308   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:47:15.585449   23621 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:47:15.586928   23621 host.go:66] Checking if "ha-406505" exists ...
	I1007 10:47:15.587242   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:47:15.587291   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:47:15.601672   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41215
	I1007 10:47:15.602061   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:47:15.602537   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:47:15.602556   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:47:15.602817   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:47:15.602964   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:47:15.603079   23621 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505 for IP: 192.168.39.37
	I1007 10:47:15.603088   23621 certs.go:194] generating shared ca certs ...
	I1007 10:47:15.603106   23621 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:47:15.603231   23621 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 10:47:15.603292   23621 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 10:47:15.603306   23621 certs.go:256] generating profile certs ...
	I1007 10:47:15.603393   23621 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key
	I1007 10:47:15.603425   23621 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.9c139e39
	I1007 10:47:15.603446   23621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.9c139e39 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.250 192.168.39.37 192.168.39.254]
	I1007 10:47:15.744161   23621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.9c139e39 ...
	I1007 10:47:15.744193   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.9c139e39: {Name:mkae386a40e79e3b04467f9f82e8cc7ab31669ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:47:15.744370   23621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.9c139e39 ...
	I1007 10:47:15.744387   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.9c139e39: {Name:mkd96b82bea042246d2ff8a9f6d26e46ce2f8d80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:47:15.744484   23621 certs.go:381] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.9c139e39 -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt
	I1007 10:47:15.744631   23621 certs.go:385] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.9c139e39 -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key
	I1007 10:47:15.744793   23621 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key
	I1007 10:47:15.744812   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 10:47:15.744830   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 10:47:15.744846   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 10:47:15.744865   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 10:47:15.744882   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 10:47:15.744900   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 10:47:15.744919   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 10:47:15.744937   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 10:47:15.745001   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 10:47:15.745040   23621 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 10:47:15.745053   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 10:47:15.745085   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 10:47:15.745117   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 10:47:15.745148   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 10:47:15.745217   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:47:15.745255   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:47:15.745278   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem -> /usr/share/ca-certificates/11096.pem
	I1007 10:47:15.745298   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /usr/share/ca-certificates/110962.pem
	I1007 10:47:15.745339   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:47:15.748712   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:47:15.749114   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:47:15.749137   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:47:15.749337   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:47:15.749533   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:47:15.749703   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:47:15.749841   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:47:15.828372   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 10:47:15.833129   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 10:47:15.845052   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 10:47:15.849337   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1007 10:47:15.859666   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 10:47:15.864073   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 10:47:15.882571   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 10:47:15.888480   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1007 10:47:15.901431   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 10:47:15.905968   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 10:47:15.922566   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 10:47:15.927045   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 10:47:15.940895   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 10:47:15.967974   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 10:47:15.993940   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 10:47:16.018147   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 10:47:16.043434   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1007 10:47:16.069121   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 10:47:16.093333   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 10:47:16.117209   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 10:47:16.141941   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 10:47:16.166358   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 10:47:16.191390   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 10:47:16.216168   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 10:47:16.233270   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1007 10:47:16.250510   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 10:47:16.267543   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1007 10:47:16.287073   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 10:47:16.306608   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 10:47:16.324070   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 10:47:16.341221   23621 ssh_runner.go:195] Run: openssl version
	I1007 10:47:16.347150   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 10:47:16.358131   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 10:47:16.362824   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 10:47:16.362874   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 10:47:16.368599   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 10:47:16.378927   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 10:47:16.389775   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:47:16.394445   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:47:16.394503   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:47:16.400151   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 10:47:16.410835   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 10:47:16.421451   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 10:47:16.425954   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 10:47:16.426044   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 10:47:16.432023   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
	I1007 10:47:16.443765   23621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 10:47:16.448499   23621 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 10:47:16.448550   23621 kubeadm.go:934] updating node {m02 192.168.39.37 8443 v1.31.1 crio true true} ...
	I1007 10:47:16.448621   23621 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406505-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.37
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 10:47:16.448641   23621 kube-vip.go:115] generating kube-vip config ...
	I1007 10:47:16.448674   23621 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 10:47:16.465324   23621 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 10:47:16.465389   23621 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1007 10:47:16.465443   23621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 10:47:16.476363   23621 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1007 10:47:16.476434   23621 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1007 10:47:16.487040   23621 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1007 10:47:16.487085   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 10:47:16.487142   23621 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1007 10:47:16.487150   23621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 10:47:16.487275   23621 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1007 10:47:16.491771   23621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1007 10:47:16.491798   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1007 10:47:17.509026   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 10:47:17.524363   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 10:47:17.524452   23621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 10:47:17.528672   23621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1007 10:47:17.528709   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1007 10:47:17.599765   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 10:47:17.599853   23621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 10:47:17.612766   23621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1007 10:47:17.612810   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1007 10:47:18.077437   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 10:47:18.088177   23621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1007 10:47:18.105381   23621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 10:47:18.122405   23621 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 10:47:18.142555   23621 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 10:47:18.146470   23621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:47:18.159594   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:47:18.291092   23621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:47:18.309170   23621 host.go:66] Checking if "ha-406505" exists ...
	I1007 10:47:18.309657   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:47:18.309712   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:47:18.324913   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37239
	I1007 10:47:18.325340   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:47:18.325803   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:47:18.325831   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:47:18.326166   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:47:18.326334   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:47:18.326443   23621 start.go:317] joinCluster: &{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:47:18.326602   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1007 10:47:18.326630   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:47:18.329583   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:47:18.329975   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:47:18.330001   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:47:18.330140   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:47:18.330306   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:47:18.330451   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:47:18.330595   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:47:18.480055   23621 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:47:18.480129   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hab5tp.p59kud3l77ixefj4 --discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-406505-m02 --control-plane --apiserver-advertise-address=192.168.39.37 --apiserver-bind-port=8443"
	I1007 10:47:40.053984   23621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hab5tp.p59kud3l77ixefj4 --discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-406505-m02 --control-plane --apiserver-advertise-address=192.168.39.37 --apiserver-bind-port=8443": (21.573829794s)
	I1007 10:47:40.054022   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1007 10:47:40.624911   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-406505-m02 minikube.k8s.io/updated_at=2024_10_07T10_47_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f minikube.k8s.io/name=ha-406505 minikube.k8s.io/primary=false
	I1007 10:47:40.773203   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-406505-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1007 10:47:40.895450   23621 start.go:319] duration metric: took 22.569002454s to joinCluster
	I1007 10:47:40.895532   23621 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:47:40.895833   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:47:40.897246   23621 out.go:177] * Verifying Kubernetes components...
	I1007 10:47:40.898575   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:47:41.187385   23621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:47:41.220775   23621 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:47:41.221110   23621 kapi.go:59] client config for ha-406505: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.crt", KeyFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key", CAFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 10:47:41.221195   23621 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.250:8443
	I1007 10:47:41.221469   23621 node_ready.go:35] waiting up to 6m0s for node "ha-406505-m02" to be "Ready" ...
	I1007 10:47:41.221568   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:41.221578   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:41.221589   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:41.221596   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:41.242142   23621 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1007 10:47:41.721789   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:41.721819   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:41.721830   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:41.721836   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:41.725638   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:42.222559   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:42.222582   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:42.222592   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:42.222597   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:42.226807   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:42.722633   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:42.722659   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:42.722670   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:42.722676   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:42.727142   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:43.222278   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:43.222306   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:43.222318   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:43.222325   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:43.225924   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:43.226434   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:43.722388   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:43.722413   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:43.722421   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:43.722426   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:43.726394   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:44.221754   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:44.221782   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:44.221791   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:44.221797   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:44.225377   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:44.722382   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:44.722405   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:44.722415   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:44.722421   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:44.726019   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:45.222002   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:45.222024   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:45.222035   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:45.222042   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:45.228065   23621 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 10:47:45.228617   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:45.722139   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:45.722161   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:45.722169   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:45.722174   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:45.726310   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:46.221951   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:46.221984   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:46.221995   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:46.222001   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:46.226108   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:46.722407   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:46.722427   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:46.722434   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:46.722439   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:46.726228   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:47.222433   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:47.222457   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:47.222466   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:47.222471   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:47.226517   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:47.722508   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:47.722532   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:47.722541   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:47.722546   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:47.725944   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:47.726592   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:48.222456   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:48.222477   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:48.222487   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:48.222492   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:48.568208   23621 round_trippers.go:574] Response Status: 200 OK in 345 milliseconds
	I1007 10:47:48.721707   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:48.721729   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:48.721737   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:48.721740   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:48.725191   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:49.222104   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:49.222129   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:49.222137   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:49.222142   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:49.226421   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:49.722572   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:49.722597   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:49.722606   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:49.722610   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:49.726213   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:49.726960   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:50.222350   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:50.222373   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:50.222381   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:50.222384   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:50.226118   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:50.722605   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:50.722631   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:50.722640   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:50.722645   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:50.726160   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:51.221666   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:51.221694   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:51.221714   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:51.221721   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:51.225253   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:51.722133   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:51.722158   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:51.722167   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:51.722171   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:51.725645   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:52.221757   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:52.221780   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:52.221787   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:52.221792   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:52.226043   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:52.226536   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:52.721878   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:52.721905   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:52.721913   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:52.721917   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:52.725379   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:53.221755   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:53.221777   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:53.221786   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:53.221789   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:53.225585   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:53.721883   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:53.721908   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:53.721920   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:53.721925   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:53.725474   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:54.221694   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:54.221720   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:54.221731   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:54.221737   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:54.225868   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:54.226748   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:54.722061   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:54.722086   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:54.722094   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:54.722099   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:54.725979   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:55.221978   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:55.222010   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:55.222019   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:55.222022   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:55.225724   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:55.721884   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:55.721911   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:55.721924   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:55.721931   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:55.726067   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:56.222572   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:56.222595   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:56.222603   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:56.222606   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:56.227082   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:56.227824   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:56.722293   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:56.722317   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:56.722325   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:56.722329   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:56.726068   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:57.222438   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:57.222461   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:57.222469   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:57.222478   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:57.226913   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:57.722050   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:57.722075   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:57.722083   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:57.722087   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:57.726100   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:58.222538   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:58.222560   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:58.222568   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:58.222572   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:58.227033   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:58.722681   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:58.722703   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:58.722711   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:58.722717   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:58.725986   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:58.726597   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:59.221983   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:59.222007   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:59.222015   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:59.222018   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:59.225585   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:59.722632   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:59.722658   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:59.722668   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:59.722672   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:59.726213   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:00.222316   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:00.222339   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.222347   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.222351   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.225920   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:00.722449   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:00.722475   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.722484   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.722488   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.725827   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:00.726434   23621 node_ready.go:49] node "ha-406505-m02" has status "Ready":"True"
	I1007 10:48:00.726454   23621 node_ready.go:38] duration metric: took 19.504967744s for node "ha-406505-m02" to be "Ready" ...
	I1007 10:48:00.726462   23621 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 10:48:00.726536   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:48:00.726548   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.726555   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.726559   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.731138   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:00.737911   23621 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ghmwd" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.737985   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghmwd
	I1007 10:48:00.737993   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.738001   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.738005   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.741209   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:00.742237   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:00.742253   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.742260   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.742265   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.745097   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:48:00.745537   23621 pod_ready.go:93] pod "coredns-7c65d6cfc9-ghmwd" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:00.745556   23621 pod_ready.go:82] duration metric: took 7.621102ms for pod "coredns-7c65d6cfc9-ghmwd" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.745565   23621 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xzc88" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.745629   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xzc88
	I1007 10:48:00.745638   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.745645   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.745650   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.748174   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:48:00.748906   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:00.748922   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.748930   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.748936   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.751224   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:48:00.751710   23621 pod_ready.go:93] pod "coredns-7c65d6cfc9-xzc88" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:00.751731   23621 pod_ready.go:82] duration metric: took 6.159383ms for pod "coredns-7c65d6cfc9-xzc88" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.751740   23621 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.751799   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-406505
	I1007 10:48:00.751809   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.751816   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.751820   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.755074   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:00.755602   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:00.755617   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.755625   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.755629   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.758258   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:48:00.758840   23621 pod_ready.go:93] pod "etcd-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:00.758864   23621 pod_ready.go:82] duration metric: took 7.117967ms for pod "etcd-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.758875   23621 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.758941   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-406505-m02
	I1007 10:48:00.758951   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.758962   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.758969   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.761946   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:48:00.762531   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:00.762545   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.762555   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.762563   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.765249   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:48:00.765990   23621 pod_ready.go:93] pod "etcd-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:00.766010   23621 pod_ready.go:82] duration metric: took 7.127993ms for pod "etcd-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.766024   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.923419   23621 request.go:632] Waited for 157.329652ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505
	I1007 10:48:00.923504   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505
	I1007 10:48:00.923514   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.923521   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.923526   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.926903   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:01.122872   23621 request.go:632] Waited for 195.370343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:01.122996   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:01.123006   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:01.123014   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:01.123018   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:01.126358   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:01.127128   23621 pod_ready.go:93] pod "kube-apiserver-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:01.127149   23621 pod_ready.go:82] duration metric: took 361.118588ms for pod "kube-apiserver-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:01.127159   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:01.322514   23621 request.go:632] Waited for 195.261429ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505-m02
	I1007 10:48:01.322571   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505-m02
	I1007 10:48:01.322577   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:01.322584   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:01.322589   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:01.326760   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:01.523038   23621 request.go:632] Waited for 195.412644ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:01.523093   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:01.523098   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:01.523105   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:01.523109   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:01.527065   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:01.527580   23621 pod_ready.go:93] pod "kube-apiserver-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:01.527599   23621 pod_ready.go:82] duration metric: took 400.432673ms for pod "kube-apiserver-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:01.527611   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:01.722806   23621 request.go:632] Waited for 195.048611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505
	I1007 10:48:01.722880   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505
	I1007 10:48:01.722888   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:01.722898   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:01.722904   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:01.727096   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:01.923348   23621 request.go:632] Waited for 195.373775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:01.923440   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:01.923452   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:01.923463   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:01.923469   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:01.927522   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:01.927961   23621 pod_ready.go:93] pod "kube-controller-manager-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:01.927977   23621 pod_ready.go:82] duration metric: took 400.359633ms for pod "kube-controller-manager-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:01.928001   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:02.123092   23621 request.go:632] Waited for 195.004556ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505-m02
	I1007 10:48:02.123150   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505-m02
	I1007 10:48:02.123157   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:02.123164   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:02.123167   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:02.127404   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:02.323429   23621 request.go:632] Waited for 195.351342ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:02.323503   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:02.323511   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:02.323522   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:02.323532   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:02.326657   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:02.327382   23621 pod_ready.go:93] pod "kube-controller-manager-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:02.327399   23621 pod_ready.go:82] duration metric: took 399.387331ms for pod "kube-controller-manager-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:02.327409   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6ng4z" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:02.522522   23621 request.go:632] Waited for 195.05566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6ng4z
	I1007 10:48:02.522601   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6ng4z
	I1007 10:48:02.522607   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:02.522615   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:02.522620   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:02.526624   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:02.722785   23621 request.go:632] Waited for 195.392665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:02.722866   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:02.722874   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:02.722885   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:02.722889   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:02.726617   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:02.727143   23621 pod_ready.go:93] pod "kube-proxy-6ng4z" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:02.727160   23621 pod_ready.go:82] duration metric: took 399.745226ms for pod "kube-proxy-6ng4z" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:02.727169   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nlnhf" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:02.923398   23621 request.go:632] Waited for 196.154565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nlnhf
	I1007 10:48:02.923464   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nlnhf
	I1007 10:48:02.923473   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:02.923484   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:02.923492   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:02.926698   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:03.122834   23621 request.go:632] Waited for 195.347405ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:03.122890   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:03.122897   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:03.122905   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:03.122909   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:03.126570   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:03.127726   23621 pod_ready.go:93] pod "kube-proxy-nlnhf" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:03.127745   23621 pod_ready.go:82] duration metric: took 400.569818ms for pod "kube-proxy-nlnhf" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:03.127759   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:03.322923   23621 request.go:632] Waited for 195.092944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505
	I1007 10:48:03.322991   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505
	I1007 10:48:03.322997   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:03.323004   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:03.323009   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:03.326336   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:03.523252   23621 request.go:632] Waited for 196.355286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:03.523323   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:03.523328   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:03.523336   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:03.523344   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:03.526876   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:03.527478   23621 pod_ready.go:93] pod "kube-scheduler-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:03.527506   23621 pod_ready.go:82] duration metric: took 399.737789ms for pod "kube-scheduler-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:03.527518   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:03.722433   23621 request.go:632] Waited for 194.843724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505-m02
	I1007 10:48:03.722510   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505-m02
	I1007 10:48:03.722516   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:03.722524   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:03.722534   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:03.726261   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:03.923306   23621 request.go:632] Waited for 196.357784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:03.923362   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:03.923368   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:03.923375   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:03.923379   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:03.927011   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:03.927578   23621 pod_ready.go:93] pod "kube-scheduler-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:03.927594   23621 pod_ready.go:82] duration metric: took 400.068935ms for pod "kube-scheduler-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:03.927605   23621 pod_ready.go:39] duration metric: took 3.201132108s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
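	(Editor's note: the pod_ready waits above poll each pod through the API server until its PodReady condition reports True. A minimal illustrative sketch of that check with client-go is shown below; it is not minikube's actual pod_ready.go, and the kubeconfig path is an assumption.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: a kubeconfig pointing at the cluster under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-ha-406505", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second) // simple fixed-interval poll
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}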
	I1007 10:48:03.927618   23621 api_server.go:52] waiting for apiserver process to appear ...
	I1007 10:48:03.927663   23621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 10:48:03.942605   23621 api_server.go:72] duration metric: took 23.047005374s to wait for apiserver process to appear ...
	I1007 10:48:03.942635   23621 api_server.go:88] waiting for apiserver healthz status ...
	I1007 10:48:03.942653   23621 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I1007 10:48:03.947020   23621 api_server.go:279] https://192.168.39.250:8443/healthz returned 200:
	ok
	I1007 10:48:03.947103   23621 round_trippers.go:463] GET https://192.168.39.250:8443/version
	I1007 10:48:03.947113   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:03.947126   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:03.947134   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:03.948044   23621 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1007 10:48:03.948143   23621 api_server.go:141] control plane version: v1.31.1
	I1007 10:48:03.948169   23621 api_server.go:131] duration metric: took 5.525857ms to wait for apiserver health ...
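	(Editor's note: the healthz probe above is an HTTPS GET against /healthz that expects the literal body "ok". A hedged standalone sketch follows; the InsecureSkipVerify shortcut is an assumption for illustration only, whereas minikube authenticates with the cluster CA from the kubeconfig.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustration only: skip certificate verification instead of loading the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.250:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // a healthy apiserver returns "200 ok"
	}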
	I1007 10:48:03.948178   23621 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 10:48:04.122494   23621 request.go:632] Waited for 174.227541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:48:04.122548   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:48:04.122554   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:04.122561   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:04.122565   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:04.127425   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:04.131821   23621 system_pods.go:59] 17 kube-system pods found
	I1007 10:48:04.131853   23621 system_pods.go:61] "coredns-7c65d6cfc9-ghmwd" [8d8533b9-192b-49a8-8d17-96ffd98cb729] Running
	I1007 10:48:04.131860   23621 system_pods.go:61] "coredns-7c65d6cfc9-xzc88" [f22736c0-5ca4-4c9b-bcd4-cf95f9390507] Running
	I1007 10:48:04.131867   23621 system_pods.go:61] "etcd-ha-406505" [06acd5be-60d4-4e5d-878a-eb237eccef90] Running
	I1007 10:48:04.131873   23621 system_pods.go:61] "etcd-ha-406505-m02" [6bac8986-60bc-4067-8d14-39e7a7d89de4] Running
	I1007 10:48:04.131878   23621 system_pods.go:61] "kindnet-h8fh4" [4963cef8-d0f0-47a7-a9f3-4ec6cc1cbdd2] Running
	I1007 10:48:04.131884   23621 system_pods.go:61] "kindnet-pt74h" [bb72605c-a772-4b04-a14d-02efe957c9d0] Running
	I1007 10:48:04.131889   23621 system_pods.go:61] "kube-apiserver-ha-406505" [86ec3125-9faf-431c-829c-74bedca10848] Running
	I1007 10:48:04.131893   23621 system_pods.go:61] "kube-apiserver-ha-406505-m02" [9b1f1980-971a-4059-90f3-75aa418811e9] Running
	I1007 10:48:04.131898   23621 system_pods.go:61] "kube-controller-manager-ha-406505" [9f228931-e5fe-4983-aed5-71ec54a10242] Running
	I1007 10:48:04.131903   23621 system_pods.go:61] "kube-controller-manager-ha-406505-m02" [87c1a5f3-2c61-40b6-8ac6-8641fe480883] Running
	I1007 10:48:04.131908   23621 system_pods.go:61] "kube-proxy-6ng4z" [0bbf71c3-f4c6-44e2-a86f-2528957fd17e] Running
	I1007 10:48:04.131914   23621 system_pods.go:61] "kube-proxy-nlnhf" [053080d5-38da-4108-96aa-f4a8dbe5de91] Running
	I1007 10:48:04.131919   23621 system_pods.go:61] "kube-scheduler-ha-406505" [40a9a2f6-4f4e-48eb-8ef2-958b87cea171] Running
	I1007 10:48:04.131925   23621 system_pods.go:61] "kube-scheduler-ha-406505-m02" [b1def4a5-2143-46b6-ae4a-44b7997ec7b2] Running
	I1007 10:48:04.131932   23621 system_pods.go:61] "kube-vip-ha-406505" [e31ca80b-44ca-4b0a-8d38-ba06f81592f8] Running
	I1007 10:48:04.131939   23621 system_pods.go:61] "kube-vip-ha-406505-m02" [982dab2a-18d2-409a-9d27-51f746597898] Running
	I1007 10:48:04.131945   23621 system_pods.go:61] "storage-provisioner" [be10b32c-e562-40ef-8b47-04cd1caf9778] Running
	I1007 10:48:04.131956   23621 system_pods.go:74] duration metric: took 183.770827ms to wait for pod list to return data ...
	I1007 10:48:04.131966   23621 default_sa.go:34] waiting for default service account to be created ...
	I1007 10:48:04.323406   23621 request.go:632] Waited for 191.335119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/default/serviceaccounts
	I1007 10:48:04.323466   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/default/serviceaccounts
	I1007 10:48:04.323474   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:04.323485   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:04.323491   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:04.326946   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:04.327172   23621 default_sa.go:45] found service account: "default"
	I1007 10:48:04.327188   23621 default_sa.go:55] duration metric: took 195.21627ms for default service account to be created ...
	I1007 10:48:04.327195   23621 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 10:48:04.522586   23621 request.go:632] Waited for 195.315471ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:48:04.522647   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:48:04.522653   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:04.522661   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:04.522664   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:04.527722   23621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 10:48:04.532291   23621 system_pods.go:86] 17 kube-system pods found
	I1007 10:48:04.532319   23621 system_pods.go:89] "coredns-7c65d6cfc9-ghmwd" [8d8533b9-192b-49a8-8d17-96ffd98cb729] Running
	I1007 10:48:04.532328   23621 system_pods.go:89] "coredns-7c65d6cfc9-xzc88" [f22736c0-5ca4-4c9b-bcd4-cf95f9390507] Running
	I1007 10:48:04.532333   23621 system_pods.go:89] "etcd-ha-406505" [06acd5be-60d4-4e5d-878a-eb237eccef90] Running
	I1007 10:48:04.532338   23621 system_pods.go:89] "etcd-ha-406505-m02" [6bac8986-60bc-4067-8d14-39e7a7d89de4] Running
	I1007 10:48:04.532345   23621 system_pods.go:89] "kindnet-h8fh4" [4963cef8-d0f0-47a7-a9f3-4ec6cc1cbdd2] Running
	I1007 10:48:04.532350   23621 system_pods.go:89] "kindnet-pt74h" [bb72605c-a772-4b04-a14d-02efe957c9d0] Running
	I1007 10:48:04.532356   23621 system_pods.go:89] "kube-apiserver-ha-406505" [86ec3125-9faf-431c-829c-74bedca10848] Running
	I1007 10:48:04.532362   23621 system_pods.go:89] "kube-apiserver-ha-406505-m02" [9b1f1980-971a-4059-90f3-75aa418811e9] Running
	I1007 10:48:04.532370   23621 system_pods.go:89] "kube-controller-manager-ha-406505" [9f228931-e5fe-4983-aed5-71ec54a10242] Running
	I1007 10:48:04.532380   23621 system_pods.go:89] "kube-controller-manager-ha-406505-m02" [87c1a5f3-2c61-40b6-8ac6-8641fe480883] Running
	I1007 10:48:04.532386   23621 system_pods.go:89] "kube-proxy-6ng4z" [0bbf71c3-f4c6-44e2-a86f-2528957fd17e] Running
	I1007 10:48:04.532395   23621 system_pods.go:89] "kube-proxy-nlnhf" [053080d5-38da-4108-96aa-f4a8dbe5de91] Running
	I1007 10:48:04.532401   23621 system_pods.go:89] "kube-scheduler-ha-406505" [40a9a2f6-4f4e-48eb-8ef2-958b87cea171] Running
	I1007 10:48:04.532409   23621 system_pods.go:89] "kube-scheduler-ha-406505-m02" [b1def4a5-2143-46b6-ae4a-44b7997ec7b2] Running
	I1007 10:48:04.532415   23621 system_pods.go:89] "kube-vip-ha-406505" [e31ca80b-44ca-4b0a-8d38-ba06f81592f8] Running
	I1007 10:48:04.532422   23621 system_pods.go:89] "kube-vip-ha-406505-m02" [982dab2a-18d2-409a-9d27-51f746597898] Running
	I1007 10:48:04.532426   23621 system_pods.go:89] "storage-provisioner" [be10b32c-e562-40ef-8b47-04cd1caf9778] Running
	I1007 10:48:04.532436   23621 system_pods.go:126] duration metric: took 205.234668ms to wait for k8s-apps to be running ...
	I1007 10:48:04.532449   23621 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 10:48:04.532504   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 10:48:04.548000   23621 system_svc.go:56] duration metric: took 15.524581ms WaitForService to wait for kubelet
	I1007 10:48:04.548032   23621 kubeadm.go:582] duration metric: took 23.652436292s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 10:48:04.548054   23621 node_conditions.go:102] verifying NodePressure condition ...
	I1007 10:48:04.723508   23621 request.go:632] Waited for 175.357529ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes
	I1007 10:48:04.723563   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes
	I1007 10:48:04.723568   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:04.723576   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:04.723585   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:04.728067   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:04.728956   23621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 10:48:04.728985   23621 node_conditions.go:123] node cpu capacity is 2
	I1007 10:48:04.728999   23621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 10:48:04.729004   23621 node_conditions.go:123] node cpu capacity is 2
	I1007 10:48:04.729010   23621 node_conditions.go:105] duration metric: took 180.950188ms to run NodePressure ...
	I1007 10:48:04.729032   23621 start.go:241] waiting for startup goroutines ...
	I1007 10:48:04.729064   23621 start.go:255] writing updated cluster config ...
	I1007 10:48:04.731245   23621 out.go:201] 
	I1007 10:48:04.732721   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:48:04.732820   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:48:04.734501   23621 out.go:177] * Starting "ha-406505-m03" control-plane node in "ha-406505" cluster
	I1007 10:48:04.735780   23621 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:48:04.735806   23621 cache.go:56] Caching tarball of preloaded images
	I1007 10:48:04.735908   23621 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 10:48:04.735925   23621 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 10:48:04.736053   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:48:04.736293   23621 start.go:360] acquireMachinesLock for ha-406505-m03: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 10:48:04.736354   23621 start.go:364] duration metric: took 34.69µs to acquireMachinesLock for "ha-406505-m03"
	I1007 10:48:04.736376   23621 start.go:93] Provisioning new machine with config: &{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:48:04.736511   23621 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1007 10:48:04.738190   23621 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 10:48:04.738285   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:48:04.738332   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:48:04.754047   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32911
	I1007 10:48:04.754525   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:48:04.754992   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:48:04.755012   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:48:04.755365   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:48:04.755518   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetMachineName
	I1007 10:48:04.755655   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:04.755786   23621 start.go:159] libmachine.API.Create for "ha-406505" (driver="kvm2")
	I1007 10:48:04.755817   23621 client.go:168] LocalClient.Create starting
	I1007 10:48:04.755857   23621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem
	I1007 10:48:04.755899   23621 main.go:141] libmachine: Decoding PEM data...
	I1007 10:48:04.755923   23621 main.go:141] libmachine: Parsing certificate...
	I1007 10:48:04.755968   23621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem
	I1007 10:48:04.755997   23621 main.go:141] libmachine: Decoding PEM data...
	I1007 10:48:04.756011   23621 main.go:141] libmachine: Parsing certificate...
	I1007 10:48:04.756031   23621 main.go:141] libmachine: Running pre-create checks...
	I1007 10:48:04.756042   23621 main.go:141] libmachine: (ha-406505-m03) Calling .PreCreateCheck
	I1007 10:48:04.756216   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetConfigRaw
	I1007 10:48:04.756599   23621 main.go:141] libmachine: Creating machine...
	I1007 10:48:04.756611   23621 main.go:141] libmachine: (ha-406505-m03) Calling .Create
	I1007 10:48:04.756765   23621 main.go:141] libmachine: (ha-406505-m03) Creating KVM machine...
	I1007 10:48:04.757963   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found existing default KVM network
	I1007 10:48:04.758099   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found existing private KVM network mk-ha-406505
	I1007 10:48:04.758232   23621 main.go:141] libmachine: (ha-406505-m03) Setting up store path in /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03 ...
	I1007 10:48:04.758273   23621 main.go:141] libmachine: (ha-406505-m03) Building disk image from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 10:48:04.758345   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:04.758258   24407 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:48:04.758425   23621 main.go:141] libmachine: (ha-406505-m03) Downloading /home/jenkins/minikube-integration/19761-3912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 10:48:05.006754   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:05.006635   24407 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa...
	I1007 10:48:05.394400   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:05.394253   24407 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/ha-406505-m03.rawdisk...
	I1007 10:48:05.394429   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Writing magic tar header
	I1007 10:48:05.394439   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Writing SSH key tar header
	I1007 10:48:05.394459   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:05.394362   24407 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03 ...
	I1007 10:48:05.394475   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03
	I1007 10:48:05.394502   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines
	I1007 10:48:05.394516   23621 main.go:141] libmachine: (ha-406505-m03) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03 (perms=drwx------)
	I1007 10:48:05.394522   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:48:05.394535   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912
	I1007 10:48:05.394541   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 10:48:05.394550   23621 main.go:141] libmachine: (ha-406505-m03) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines (perms=drwxr-xr-x)
	I1007 10:48:05.394560   23621 main.go:141] libmachine: (ha-406505-m03) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube (perms=drwxr-xr-x)
	I1007 10:48:05.394571   23621 main.go:141] libmachine: (ha-406505-m03) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912 (perms=drwxrwxr-x)
	I1007 10:48:05.394584   23621 main.go:141] libmachine: (ha-406505-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 10:48:05.394597   23621 main.go:141] libmachine: (ha-406505-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 10:48:05.394606   23621 main.go:141] libmachine: (ha-406505-m03) Creating domain...
	I1007 10:48:05.394611   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home/jenkins
	I1007 10:48:05.394619   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home
	I1007 10:48:05.394623   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Skipping /home - not owner
	I1007 10:48:05.395724   23621 main.go:141] libmachine: (ha-406505-m03) define libvirt domain using xml: 
	I1007 10:48:05.395761   23621 main.go:141] libmachine: (ha-406505-m03) <domain type='kvm'>
	I1007 10:48:05.395773   23621 main.go:141] libmachine: (ha-406505-m03)   <name>ha-406505-m03</name>
	I1007 10:48:05.395784   23621 main.go:141] libmachine: (ha-406505-m03)   <memory unit='MiB'>2200</memory>
	I1007 10:48:05.395793   23621 main.go:141] libmachine: (ha-406505-m03)   <vcpu>2</vcpu>
	I1007 10:48:05.395802   23621 main.go:141] libmachine: (ha-406505-m03)   <features>
	I1007 10:48:05.395809   23621 main.go:141] libmachine: (ha-406505-m03)     <acpi/>
	I1007 10:48:05.395818   23621 main.go:141] libmachine: (ha-406505-m03)     <apic/>
	I1007 10:48:05.395827   23621 main.go:141] libmachine: (ha-406505-m03)     <pae/>
	I1007 10:48:05.395836   23621 main.go:141] libmachine: (ha-406505-m03)     
	I1007 10:48:05.395844   23621 main.go:141] libmachine: (ha-406505-m03)   </features>
	I1007 10:48:05.395854   23621 main.go:141] libmachine: (ha-406505-m03)   <cpu mode='host-passthrough'>
	I1007 10:48:05.395884   23621 main.go:141] libmachine: (ha-406505-m03)   
	I1007 10:48:05.395909   23621 main.go:141] libmachine: (ha-406505-m03)   </cpu>
	I1007 10:48:05.395940   23621 main.go:141] libmachine: (ha-406505-m03)   <os>
	I1007 10:48:05.395963   23621 main.go:141] libmachine: (ha-406505-m03)     <type>hvm</type>
	I1007 10:48:05.395977   23621 main.go:141] libmachine: (ha-406505-m03)     <boot dev='cdrom'/>
	I1007 10:48:05.396000   23621 main.go:141] libmachine: (ha-406505-m03)     <boot dev='hd'/>
	I1007 10:48:05.396019   23621 main.go:141] libmachine: (ha-406505-m03)     <bootmenu enable='no'/>
	I1007 10:48:05.396035   23621 main.go:141] libmachine: (ha-406505-m03)   </os>
	I1007 10:48:05.396063   23621 main.go:141] libmachine: (ha-406505-m03)   <devices>
	I1007 10:48:05.396094   23621 main.go:141] libmachine: (ha-406505-m03)     <disk type='file' device='cdrom'>
	I1007 10:48:05.396113   23621 main.go:141] libmachine: (ha-406505-m03)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/boot2docker.iso'/>
	I1007 10:48:05.396125   23621 main.go:141] libmachine: (ha-406505-m03)       <target dev='hdc' bus='scsi'/>
	I1007 10:48:05.396137   23621 main.go:141] libmachine: (ha-406505-m03)       <readonly/>
	I1007 10:48:05.396147   23621 main.go:141] libmachine: (ha-406505-m03)     </disk>
	I1007 10:48:05.396159   23621 main.go:141] libmachine: (ha-406505-m03)     <disk type='file' device='disk'>
	I1007 10:48:05.396176   23621 main.go:141] libmachine: (ha-406505-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 10:48:05.396192   23621 main.go:141] libmachine: (ha-406505-m03)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/ha-406505-m03.rawdisk'/>
	I1007 10:48:05.396207   23621 main.go:141] libmachine: (ha-406505-m03)       <target dev='hda' bus='virtio'/>
	I1007 10:48:05.396219   23621 main.go:141] libmachine: (ha-406505-m03)     </disk>
	I1007 10:48:05.396231   23621 main.go:141] libmachine: (ha-406505-m03)     <interface type='network'>
	I1007 10:48:05.396243   23621 main.go:141] libmachine: (ha-406505-m03)       <source network='mk-ha-406505'/>
	I1007 10:48:05.396258   23621 main.go:141] libmachine: (ha-406505-m03)       <model type='virtio'/>
	I1007 10:48:05.396270   23621 main.go:141] libmachine: (ha-406505-m03)     </interface>
	I1007 10:48:05.396280   23621 main.go:141] libmachine: (ha-406505-m03)     <interface type='network'>
	I1007 10:48:05.396290   23621 main.go:141] libmachine: (ha-406505-m03)       <source network='default'/>
	I1007 10:48:05.396300   23621 main.go:141] libmachine: (ha-406505-m03)       <model type='virtio'/>
	I1007 10:48:05.396309   23621 main.go:141] libmachine: (ha-406505-m03)     </interface>
	I1007 10:48:05.396320   23621 main.go:141] libmachine: (ha-406505-m03)     <serial type='pty'>
	I1007 10:48:05.396337   23621 main.go:141] libmachine: (ha-406505-m03)       <target port='0'/>
	I1007 10:48:05.396351   23621 main.go:141] libmachine: (ha-406505-m03)     </serial>
	I1007 10:48:05.396362   23621 main.go:141] libmachine: (ha-406505-m03)     <console type='pty'>
	I1007 10:48:05.396372   23621 main.go:141] libmachine: (ha-406505-m03)       <target type='serial' port='0'/>
	I1007 10:48:05.396382   23621 main.go:141] libmachine: (ha-406505-m03)     </console>
	I1007 10:48:05.396391   23621 main.go:141] libmachine: (ha-406505-m03)     <rng model='virtio'>
	I1007 10:48:05.396401   23621 main.go:141] libmachine: (ha-406505-m03)       <backend model='random'>/dev/random</backend>
	I1007 10:48:05.396411   23621 main.go:141] libmachine: (ha-406505-m03)     </rng>
	I1007 10:48:05.396418   23621 main.go:141] libmachine: (ha-406505-m03)     
	I1007 10:48:05.396427   23621 main.go:141] libmachine: (ha-406505-m03)     
	I1007 10:48:05.396436   23621 main.go:141] libmachine: (ha-406505-m03)   </devices>
	I1007 10:48:05.396454   23621 main.go:141] libmachine: (ha-406505-m03) </domain>
	I1007 10:48:05.396464   23621 main.go:141] libmachine: (ha-406505-m03) 
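	(Editor's note: the XML dumped above is the libvirt domain definition the kvm2 driver submits for the new node. A minimal sketch of defining and starting such a domain by shelling out to virsh follows; it is illustrative only, the real driver uses the libvirt API rather than the CLI, and the file path is an assumption.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Assumption: the domain XML shown in the log has been written to this file.
		xmlPath := "/tmp/ha-406505-m03.xml"

		// Register the domain definition with libvirt.
		if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("virsh define failed: %v: %s", err, out))
		}
		// Boot the newly defined domain.
		if out, err := exec.Command("virsh", "start", "ha-406505-m03").CombinedOutput(); err != nil {
			panic(fmt.Sprintf("virsh start failed: %v: %s", err, out))
		}
		fmt.Println("domain defined and started")
	}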
	I1007 10:48:05.403522   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:af:df:35 in network default
	I1007 10:48:05.404128   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:05.404146   23621 main.go:141] libmachine: (ha-406505-m03) Ensuring networks are active...
	I1007 10:48:05.404936   23621 main.go:141] libmachine: (ha-406505-m03) Ensuring network default is active
	I1007 10:48:05.405208   23621 main.go:141] libmachine: (ha-406505-m03) Ensuring network mk-ha-406505 is active
	I1007 10:48:05.405622   23621 main.go:141] libmachine: (ha-406505-m03) Getting domain xml...
	I1007 10:48:05.406377   23621 main.go:141] libmachine: (ha-406505-m03) Creating domain...
	I1007 10:48:06.663273   23621 main.go:141] libmachine: (ha-406505-m03) Waiting to get IP...
	I1007 10:48:06.664152   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:06.664559   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:06.664583   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:06.664538   24407 retry.go:31] will retry after 215.584214ms: waiting for machine to come up
	I1007 10:48:06.882094   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:06.882713   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:06.882744   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:06.882654   24407 retry.go:31] will retry after 346.060218ms: waiting for machine to come up
	I1007 10:48:07.229850   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:07.230332   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:07.230440   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:07.230280   24407 retry.go:31] will retry after 442.798208ms: waiting for machine to come up
	I1007 10:48:07.675076   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:07.675596   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:07.675620   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:07.675547   24407 retry.go:31] will retry after 562.649906ms: waiting for machine to come up
	I1007 10:48:08.240324   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:08.240767   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:08.240800   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:08.240736   24407 retry.go:31] will retry after 482.878877ms: waiting for machine to come up
	I1007 10:48:08.725445   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:08.725807   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:08.725869   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:08.725755   24407 retry.go:31] will retry after 616.205186ms: waiting for machine to come up
	I1007 10:48:09.343485   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:09.343941   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:09.344003   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:09.343909   24407 retry.go:31] will retry after 1.040138153s: waiting for machine to come up
	I1007 10:48:10.386253   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:10.386682   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:10.386713   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:10.386637   24407 retry.go:31] will retry after 1.418753496s: waiting for machine to come up
	I1007 10:48:11.807040   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:11.807484   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:11.807521   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:11.807425   24407 retry.go:31] will retry after 1.535016663s: waiting for machine to come up
	I1007 10:48:13.343720   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:13.344267   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:13.344302   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:13.344197   24407 retry.go:31] will retry after 1.769880509s: waiting for machine to come up
	I1007 10:48:15.115316   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:15.115817   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:15.115850   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:15.115759   24407 retry.go:31] will retry after 2.49899664s: waiting for machine to come up
	I1007 10:48:17.617100   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:17.617680   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:17.617710   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:17.617615   24407 retry.go:31] will retry after 2.794854441s: waiting for machine to come up
	I1007 10:48:20.413842   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:20.414235   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:20.414299   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:20.414227   24407 retry.go:31] will retry after 2.870258619s: waiting for machine to come up
	I1007 10:48:23.285865   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:23.286247   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:23.286273   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:23.286205   24407 retry.go:31] will retry after 5.059515205s: waiting for machine to come up
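	(Editor's note: the repeated "will retry after ..." messages above reflect a retry loop with a growing, jittered delay while waiting for the VM's DHCP lease to appear. A hedged sketch of that pattern follows; lookupIP is a hypothetical stand-in, not minikube's retry package.)

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a hypothetical stand-in for querying libvirt's DHCP leases for the domain.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	func main() {
		delay := 200 * time.Millisecond
		deadline := time.Now().Add(3 * time.Minute)
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				fmt.Println("found IP:", ip)
				return
			}
			// Grow the delay and add jitter, mirroring the increasing waits in the log.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
		fmt.Println("timed out waiting for an IP")
	}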
	I1007 10:48:28.350184   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:28.350662   23621 main.go:141] libmachine: (ha-406505-m03) Found IP for machine: 192.168.39.102
	I1007 10:48:28.350688   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has current primary IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:28.350700   23621 main.go:141] libmachine: (ha-406505-m03) Reserving static IP address...
	I1007 10:48:28.351065   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find host DHCP lease matching {name: "ha-406505-m03", mac: "52:54:00:7e:e4:e0", ip: "192.168.39.102"} in network mk-ha-406505
	I1007 10:48:28.431618   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Getting to WaitForSSH function...
	I1007 10:48:28.431646   23621 main.go:141] libmachine: (ha-406505-m03) Reserved static IP address: 192.168.39.102
	I1007 10:48:28.431659   23621 main.go:141] libmachine: (ha-406505-m03) Waiting for SSH to be available...
	I1007 10:48:28.434458   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:28.434796   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505
	I1007 10:48:28.434824   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find defined IP address of network mk-ha-406505 interface with MAC address 52:54:00:7e:e4:e0
	I1007 10:48:28.434975   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Using SSH client type: external
	I1007 10:48:28.435007   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa (-rw-------)
	I1007 10:48:28.435035   23621 main.go:141] libmachine: (ha-406505-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 10:48:28.435054   23621 main.go:141] libmachine: (ha-406505-m03) DBG | About to run SSH command:
	I1007 10:48:28.435085   23621 main.go:141] libmachine: (ha-406505-m03) DBG | exit 0
	I1007 10:48:28.439710   23621 main.go:141] libmachine: (ha-406505-m03) DBG | SSH cmd err, output: exit status 255: 
	I1007 10:48:28.439737   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1007 10:48:28.439768   23621 main.go:141] libmachine: (ha-406505-m03) DBG | command : exit 0
	I1007 10:48:28.439798   23621 main.go:141] libmachine: (ha-406505-m03) DBG | err     : exit status 255
	I1007 10:48:28.439811   23621 main.go:141] libmachine: (ha-406505-m03) DBG | output  : 
	I1007 10:48:31.440230   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Getting to WaitForSSH function...
	I1007 10:48:31.442839   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.443280   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:31.443311   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.443446   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Using SSH client type: external
	I1007 10:48:31.443482   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa (-rw-------)
	I1007 10:48:31.443520   23621 main.go:141] libmachine: (ha-406505-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 10:48:31.443544   23621 main.go:141] libmachine: (ha-406505-m03) DBG | About to run SSH command:
	I1007 10:48:31.443556   23621 main.go:141] libmachine: (ha-406505-m03) DBG | exit 0
	I1007 10:48:31.568683   23621 main.go:141] libmachine: (ha-406505-m03) DBG | SSH cmd err, output: <nil>: 
	I1007 10:48:31.568948   23621 main.go:141] libmachine: (ha-406505-m03) KVM machine creation complete!
	I1007 10:48:31.569279   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetConfigRaw
	I1007 10:48:31.569953   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:31.570177   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:31.570345   23621 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 10:48:31.570360   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetState
	I1007 10:48:31.571674   23621 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 10:48:31.571686   23621 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 10:48:31.571691   23621 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 10:48:31.571696   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:31.574360   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.574751   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:31.574773   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.574972   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:31.575161   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.575318   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.575453   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:31.575630   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:48:31.575886   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1007 10:48:31.575901   23621 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 10:48:31.679615   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:48:31.679639   23621 main.go:141] libmachine: Detecting the provisioner...
	I1007 10:48:31.679646   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:31.682574   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.682919   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:31.682944   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.683116   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:31.683308   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.683480   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.683605   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:31.683787   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:48:31.683977   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1007 10:48:31.684002   23621 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 10:48:31.789204   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 10:48:31.789302   23621 main.go:141] libmachine: found compatible host: buildroot
	I1007 10:48:31.789319   23621 main.go:141] libmachine: Provisioning with buildroot...
	I1007 10:48:31.789332   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetMachineName
	I1007 10:48:31.789607   23621 buildroot.go:166] provisioning hostname "ha-406505-m03"
	I1007 10:48:31.789633   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetMachineName
	I1007 10:48:31.789836   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:31.792541   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.792898   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:31.792925   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.793077   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:31.793430   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.793697   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.793864   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:31.794038   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:48:31.794203   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1007 10:48:31.794220   23621 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406505-m03 && echo "ha-406505-m03" | sudo tee /etc/hostname
	I1007 10:48:31.915086   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406505-m03
	
	I1007 10:48:31.915117   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:31.918064   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.918448   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:31.918486   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.918647   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:31.918833   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.918992   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.919119   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:31.919284   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:48:31.919488   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1007 10:48:31.919532   23621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406505-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406505-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406505-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 10:48:32.033622   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:48:32.033656   23621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 10:48:32.033671   23621 buildroot.go:174] setting up certificates
	I1007 10:48:32.033679   23621 provision.go:84] configureAuth start
	I1007 10:48:32.033688   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetMachineName
	I1007 10:48:32.034012   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetIP
	I1007 10:48:32.037059   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.037482   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.037516   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.037674   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:32.040020   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.040373   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.040394   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.040541   23621 provision.go:143] copyHostCerts
	I1007 10:48:32.040567   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:48:32.040595   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 10:48:32.040603   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:48:32.040668   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 10:48:32.040738   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:48:32.040754   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 10:48:32.040761   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:48:32.040784   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 10:48:32.040824   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:48:32.040840   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 10:48:32.040846   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:48:32.040866   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 10:48:32.040911   23621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.ha-406505-m03 san=[127.0.0.1 192.168.39.102 ha-406505-m03 localhost minikube]
	I1007 10:48:32.221278   23621 provision.go:177] copyRemoteCerts
	I1007 10:48:32.221329   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 10:48:32.221355   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:32.224264   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.224745   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.224771   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.224993   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:32.225158   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.225327   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:32.225465   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa Username:docker}
	I1007 10:48:32.308320   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 10:48:32.308394   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 10:48:32.337349   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 10:48:32.337427   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 10:48:32.362724   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 10:48:32.362808   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 10:48:32.388055   23621 provision.go:87] duration metric: took 354.362269ms to configureAuth
	I1007 10:48:32.388097   23621 buildroot.go:189] setting minikube options for container-runtime
	I1007 10:48:32.388337   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:48:32.388417   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:32.391464   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.391888   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.391916   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.392130   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:32.392314   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.392419   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.392546   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:32.392731   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:48:32.392934   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1007 10:48:32.392957   23621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 10:48:32.625746   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 10:48:32.625778   23621 main.go:141] libmachine: Checking connection to Docker...
	I1007 10:48:32.625788   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetURL
	I1007 10:48:32.627033   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Using libvirt version 6000000
	I1007 10:48:32.629153   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.629483   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.629535   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.629659   23621 main.go:141] libmachine: Docker is up and running!
	I1007 10:48:32.629673   23621 main.go:141] libmachine: Reticulating splines...
	I1007 10:48:32.629679   23621 client.go:171] duration metric: took 27.87385173s to LocalClient.Create
	I1007 10:48:32.629697   23621 start.go:167] duration metric: took 27.873912748s to libmachine.API.Create "ha-406505"
	I1007 10:48:32.629707   23621 start.go:293] postStartSetup for "ha-406505-m03" (driver="kvm2")
	I1007 10:48:32.629716   23621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 10:48:32.629732   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:32.629961   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 10:48:32.629987   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:32.632229   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.632615   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.632638   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.632778   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:32.632953   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.633107   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:32.633255   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa Username:docker}
	I1007 10:48:32.719017   23621 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 10:48:32.723755   23621 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 10:48:32.723780   23621 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 10:48:32.723839   23621 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 10:48:32.723945   23621 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 10:48:32.723957   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /etc/ssl/certs/110962.pem
	I1007 10:48:32.724071   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 10:48:32.734023   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:48:32.759071   23621 start.go:296] duration metric: took 129.349571ms for postStartSetup
	I1007 10:48:32.759128   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetConfigRaw
	I1007 10:48:32.759727   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetIP
	I1007 10:48:32.762372   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.762794   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.762825   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.763105   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:48:32.763346   23621 start.go:128] duration metric: took 28.026823197s to createHost
	I1007 10:48:32.763370   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:32.765734   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.766060   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.766091   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.766305   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:32.766467   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.766612   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.766764   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:32.766903   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:48:32.767070   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1007 10:48:32.767079   23621 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 10:48:32.873753   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728298112.851911112
	
	I1007 10:48:32.873779   23621 fix.go:216] guest clock: 1728298112.851911112
	I1007 10:48:32.873789   23621 fix.go:229] Guest: 2024-10-07 10:48:32.851911112 +0000 UTC Remote: 2024-10-07 10:48:32.763358943 +0000 UTC m=+152.116498435 (delta=88.552169ms)
	I1007 10:48:32.873808   23621 fix.go:200] guest clock delta is within tolerance: 88.552169ms
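	The fix.go lines above read the guest VM's clock over SSH (date +%s.%N), compare it with the host-side timestamp recorded for the machine, and accept the machine when the skew stays within tolerance. A minimal stand-alone sketch of that delta-within-tolerance arithmetic, assuming an illustrative 1s tolerance (the real threshold is whatever fix.go configures), not minikube's actual code:

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports the absolute skew between two clock readings and
	// whether it falls inside the allowed tolerance.
	func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tol
	}

	func main() {
		host := time.Now()
		guest := host.Add(88552169 * time.Nanosecond) // a skew like the ~88.55ms logged above
		d, ok := withinTolerance(guest, host, time.Second)
		fmt.Printf("guest clock delta=%v within tolerance=%v\n", d, ok)
	}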
	I1007 10:48:32.873815   23621 start.go:83] releasing machines lock for "ha-406505-m03", held for 28.137449792s
	I1007 10:48:32.873834   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:32.874113   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetIP
	I1007 10:48:32.877249   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.877618   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.877659   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.879531   23621 out.go:177] * Found network options:
	I1007 10:48:32.880848   23621 out.go:177]   - NO_PROXY=192.168.39.250,192.168.39.37
	W1007 10:48:32.882090   23621 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 10:48:32.882109   23621 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 10:48:32.882124   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:32.882710   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:32.882882   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:32.882980   23621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 10:48:32.883020   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	W1007 10:48:32.883028   23621 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 10:48:32.883048   23621 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 10:48:32.883114   23621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 10:48:32.883136   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:32.885892   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.886191   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.886254   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.886279   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.886434   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:32.886593   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.886690   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.886721   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.886723   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:32.886891   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:32.886927   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa Username:docker}
	I1007 10:48:32.887008   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.887172   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:32.887336   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa Username:docker}
	I1007 10:48:33.125827   23621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 10:48:33.132836   23621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 10:48:33.132914   23621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 10:48:33.152264   23621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 10:48:33.152289   23621 start.go:495] detecting cgroup driver to use...
	I1007 10:48:33.152363   23621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 10:48:33.172642   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 10:48:33.190770   23621 docker.go:217] disabling cri-docker service (if available) ...
	I1007 10:48:33.190848   23621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 10:48:33.206401   23621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 10:48:33.222941   23621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 10:48:33.363133   23621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 10:48:33.526409   23621 docker.go:233] disabling docker service ...
	I1007 10:48:33.526475   23621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 10:48:33.542837   23621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 10:48:33.557673   23621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 10:48:33.715377   23621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 10:48:33.847470   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 10:48:33.862560   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 10:48:33.884061   23621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 10:48:33.884116   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.897298   23621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 10:48:33.897363   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.909096   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.921064   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.932787   23621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 10:48:33.944724   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.956149   23621 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.976708   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.988978   23621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 10:48:33.999874   23621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 10:48:33.999940   23621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 10:48:34.015557   23621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
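	The sysctl probe above exits with status 255 because /proc/sys/net/bridge only appears once the br_netfilter kernel module is loaded, so the provisioner loads the module with modprobe and then enables IPv4 forwarding. A rough stand-alone sketch of that check-then-load pattern (assumes Linux and root privileges; not minikube's actual code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"

		// A missing proc entry means br_netfilter is not loaded yet.
		if _, err := os.Stat(key); os.IsNotExist(err) {
			fmt.Println("br_netfilter not loaded, running modprobe")
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				fmt.Fprintf(os.Stderr, "modprobe failed: %v: %s\n", err, out)
				os.Exit(1)
			}
		}

		// Re-read the sysctl value once the module (and its proc entry) exist.
		val, err := os.ReadFile(key)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("bridge-nf-call-iptables = %s", val)
	}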
	I1007 10:48:34.026499   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:48:34.149992   23621 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 10:48:34.251227   23621 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 10:48:34.251293   23621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 10:48:34.256863   23621 start.go:563] Will wait 60s for crictl version
	I1007 10:48:34.256915   23621 ssh_runner.go:195] Run: which crictl
	I1007 10:48:34.260970   23621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 10:48:34.301659   23621 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 10:48:34.301747   23621 ssh_runner.go:195] Run: crio --version
	I1007 10:48:34.332633   23621 ssh_runner.go:195] Run: crio --version
	I1007 10:48:34.367466   23621 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 10:48:34.369001   23621 out.go:177]   - env NO_PROXY=192.168.39.250
	I1007 10:48:34.370423   23621 out.go:177]   - env NO_PROXY=192.168.39.250,192.168.39.37
	I1007 10:48:34.371711   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetIP
	I1007 10:48:34.374438   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:34.374867   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:34.374897   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:34.375117   23621 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 10:48:34.379896   23621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:48:34.393502   23621 mustload.go:65] Loading cluster: ha-406505
	I1007 10:48:34.393757   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:48:34.394025   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:48:34.394061   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:48:34.411296   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38513
	I1007 10:48:34.411826   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:48:34.412384   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:48:34.412408   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:48:34.412720   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:48:34.412914   23621 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:48:34.414711   23621 host.go:66] Checking if "ha-406505" exists ...
	I1007 10:48:34.415007   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:48:34.415055   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:48:34.431721   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34665
	I1007 10:48:34.432227   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:48:34.432721   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:48:34.432743   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:48:34.433085   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:48:34.433286   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:48:34.433443   23621 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505 for IP: 192.168.39.102
	I1007 10:48:34.433455   23621 certs.go:194] generating shared ca certs ...
	I1007 10:48:34.433473   23621 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:48:34.433653   23621 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 10:48:34.433694   23621 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 10:48:34.433704   23621 certs.go:256] generating profile certs ...
	I1007 10:48:34.433769   23621 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key
	I1007 10:48:34.433796   23621 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.2cb567af
	I1007 10:48:34.433810   23621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.2cb567af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.250 192.168.39.37 192.168.39.102 192.168.39.254]
	I1007 10:48:34.626802   23621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.2cb567af ...
	I1007 10:48:34.626838   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.2cb567af: {Name:mk4dc5899bb034b35a02970b97ee9a5705168f50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:48:34.627028   23621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.2cb567af ...
	I1007 10:48:34.627045   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.2cb567af: {Name:mk33cc429fb28f1dd32077e7c6736b9265eee4dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:48:34.627160   23621 certs.go:381] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.2cb567af -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt
	I1007 10:48:34.627332   23621 certs.go:385] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.2cb567af -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key
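	The apiserver serving certificate generated above carries the node IPs plus the kube-vip virtual IP 192.168.39.254 in its SANs, so clients can reach any control-plane node through the VIP without TLS name mismatches. A minimal, hypothetical sketch (not minikube's certs.go) of producing a server certificate with such IP SANs using only the Go standard library; the key size, validity period and self-signing are illustrative simplifications:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Key pair for the server certificate.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-406505-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-406505-m03", "localhost", "minikube"},
			IPAddresses: []net.IP{
				net.ParseIP("127.0.0.1"),
				net.ParseIP("10.96.0.1"),      // kubernetes service ClusterIP
				net.ParseIP("192.168.39.102"), // this node
				net.ParseIP("192.168.39.254"), // the kube-vip HA virtual IP
			},
		}

		// Self-signed for brevity; minikube signs with its cluster CA instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			panic(err)
		}
	}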
	I1007 10:48:34.627505   23621 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key
	I1007 10:48:34.627523   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 10:48:34.627547   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 10:48:34.627570   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 10:48:34.627588   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 10:48:34.627606   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 10:48:34.627624   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 10:48:34.627650   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 10:48:34.648122   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 10:48:34.648245   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 10:48:34.648300   23621 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 10:48:34.648313   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 10:48:34.648345   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 10:48:34.648376   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 10:48:34.648424   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 10:48:34.649013   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:48:34.649072   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /usr/share/ca-certificates/110962.pem
	I1007 10:48:34.649091   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:48:34.649106   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem -> /usr/share/ca-certificates/11096.pem
	I1007 10:48:34.649154   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:48:34.652851   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:48:34.653287   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:48:34.653319   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:48:34.653480   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:48:34.653695   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:48:34.653872   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:48:34.653998   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:48:34.732255   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 10:48:34.739182   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 10:48:34.751245   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 10:48:34.755732   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1007 10:48:34.766849   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 10:48:34.771581   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 10:48:34.783409   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 10:48:34.788150   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1007 10:48:34.799354   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 10:48:34.804283   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 10:48:34.816354   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 10:48:34.821135   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 10:48:34.834977   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 10:48:34.863883   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 10:48:34.896166   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 10:48:34.926479   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 10:48:34.954664   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1007 10:48:34.981371   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 10:48:35.009381   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 10:48:35.036950   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 10:48:35.063824   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 10:48:35.091476   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 10:48:35.119954   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 10:48:35.148052   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 10:48:35.166363   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1007 10:48:35.186175   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 10:48:35.205554   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1007 10:48:35.223002   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 10:48:35.240092   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 10:48:35.256797   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 10:48:35.274939   23621 ssh_runner.go:195] Run: openssl version
	I1007 10:48:35.281362   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 10:48:35.293636   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 10:48:35.298579   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 10:48:35.298639   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 10:48:35.304753   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 10:48:35.315888   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 10:48:35.326832   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:48:35.331554   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:48:35.331619   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:48:35.337434   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 10:48:35.348665   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 10:48:35.360023   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 10:48:35.365259   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 10:48:35.365338   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 10:48:35.372821   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
	I1007 10:48:35.385592   23621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 10:48:35.390405   23621 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 10:48:35.390455   23621 kubeadm.go:934] updating node {m03 192.168.39.102 8443 v1.31.1 crio true true} ...
	I1007 10:48:35.390529   23621 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406505-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 10:48:35.390554   23621 kube-vip.go:115] generating kube-vip config ...
	I1007 10:48:35.390588   23621 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 10:48:35.407020   23621 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 10:48:35.407098   23621 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1007 10:48:35.407155   23621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 10:48:35.417610   23621 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1007 10:48:35.417677   23621 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1007 10:48:35.428405   23621 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1007 10:48:35.428437   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 10:48:35.428436   23621 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1007 10:48:35.428474   23621 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1007 10:48:35.428487   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 10:48:35.428508   23621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 10:48:35.428547   23621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 10:48:35.428511   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 10:48:35.446473   23621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1007 10:48:35.446517   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1007 10:48:35.446544   23621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1007 10:48:35.446546   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 10:48:35.446583   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1007 10:48:35.446648   23621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 10:48:35.470883   23621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1007 10:48:35.470927   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
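	The binaries.go lines above fall back to downloading kubectl, kubeadm and kubelet from dl.k8s.io with a "checksum=file:...sha256" hint, meaning the published .sha256 digest is fetched and compared against the downloaded file before it is copied to the node. A rough stand-alone sketch of that verify-after-download step, using only the Go standard library (not minikube's downloader):

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	// fetch downloads a URL fully into memory.
	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"

		bin, err := fetch(base)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		sum, err := fetch(base + ".sha256") // the .sha256 file holds the hex digest
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}

		h := sha256.Sum256(bin)
		got := hex.EncodeToString(h[:])
		want := strings.Fields(string(sum))[0]
		if got != want {
			fmt.Fprintf(os.Stderr, "checksum mismatch: got %s want %s\n", got, want)
			os.Exit(1)
		}
		fmt.Printf("kubectl verified, %d bytes, sha256 %s\n", len(bin), got)
	}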
	I1007 10:48:36.357285   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 10:48:36.367780   23621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 10:48:36.389088   23621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 10:48:36.406417   23621 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 10:48:36.424782   23621 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 10:48:36.429051   23621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:48:36.442669   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:48:36.586820   23621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:48:36.605650   23621 host.go:66] Checking if "ha-406505" exists ...
	I1007 10:48:36.606095   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:48:36.606145   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:48:36.622824   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45931
	I1007 10:48:36.623406   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:48:36.623956   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:48:36.624010   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:48:36.624375   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:48:36.624602   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:48:36.624756   23621 start.go:317] joinCluster: &{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:48:36.624906   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1007 10:48:36.624922   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:48:36.628085   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:48:36.628498   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:48:36.628533   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:48:36.628663   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:48:36.628842   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:48:36.628992   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:48:36.629138   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:48:36.794813   23621 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:48:36.794869   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gpv0xr.ao0m8qerz0fls7pl --discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-406505-m03 --control-plane --apiserver-advertise-address=192.168.39.102 --apiserver-bind-port=8443"
	I1007 10:48:59.856325   23621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gpv0xr.ao0m8qerz0fls7pl --discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-406505-m03 --control-plane --apiserver-advertise-address=192.168.39.102 --apiserver-bind-port=8443": (23.06138473s)
	I1007 10:48:59.856362   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1007 10:49:00.490810   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-406505-m03 minikube.k8s.io/updated_at=2024_10_07T10_49_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f minikube.k8s.io/name=ha-406505 minikube.k8s.io/primary=false
	I1007 10:49:00.615125   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-406505-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1007 10:49:00.740706   23621 start.go:319] duration metric: took 24.115945375s to joinCluster
	I1007 10:49:00.740808   23621 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:49:00.741314   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:49:00.742651   23621 out.go:177] * Verifying Kubernetes components...
	I1007 10:49:00.744087   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:49:00.980117   23621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:49:00.996987   23621 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:49:00.997383   23621 kapi.go:59] client config for ha-406505: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.crt", KeyFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key", CAFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 10:49:00.997456   23621 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.250:8443
	I1007 10:49:00.997848   23621 node_ready.go:35] waiting up to 6m0s for node "ha-406505-m03" to be "Ready" ...
	I1007 10:49:00.997952   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:00.997963   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:00.997973   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:00.997980   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:01.002879   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:01.498022   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:01.498047   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:01.498058   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:01.498063   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:01.502144   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:01.998532   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:01.998559   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:01.998571   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:01.998580   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:02.002214   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:02.498080   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:02.498113   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:02.498126   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:02.498132   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:02.502433   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:02.998449   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:02.998474   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:02.998482   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:02.998486   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:03.001753   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:03.002481   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:03.498693   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:03.498717   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:03.498727   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:03.498732   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:03.503726   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:03.998977   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:03.999008   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:03.999019   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:03.999026   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:04.002356   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:04.498338   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:04.498365   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:04.498374   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:04.498379   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:04.502295   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:04.998619   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:04.998645   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:04.998656   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:04.998660   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:05.001641   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:49:05.498634   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:05.498660   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:05.498671   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:05.498677   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:05.502156   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:05.502885   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:05.998723   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:05.998794   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:05.998812   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:05.998818   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:06.003873   23621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 10:49:06.499098   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:06.499119   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:06.499126   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:06.499131   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:06.503089   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:06.998553   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:06.998587   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:06.998595   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:06.998599   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:07.002580   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:07.498710   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:07.498736   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:07.498746   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:07.498751   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:07.502124   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:07.502967   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:07.998236   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:07.998258   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:07.998267   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:07.998271   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:08.001970   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:08.498896   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:08.498918   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:08.498927   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:08.498931   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:08.502697   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:08.998532   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:08.998561   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:08.998571   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:08.998578   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:09.002002   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:09.498039   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:09.498064   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:09.498077   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:09.498084   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:09.502005   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:09.998852   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:09.998879   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:09.998887   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:09.998893   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:10.002735   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:10.003524   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:10.499000   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:10.499026   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:10.499034   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:10.499046   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:10.502792   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:10.998624   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:10.998647   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:10.998659   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:10.998663   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:11.002342   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:11.498150   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:11.498177   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:11.498186   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:11.498193   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:11.502277   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:11.998714   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:11.998735   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:11.998743   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:11.998748   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:12.002263   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:12.498755   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:12.498782   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:12.498794   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:12.498801   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:12.502981   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:12.503718   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:12.999042   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:12.999069   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:12.999079   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:12.999085   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:13.002464   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:13.498077   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:13.498101   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:13.498110   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:13.498115   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:13.501652   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:13.998309   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:13.998332   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:13.998343   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:13.998347   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:14.001704   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:14.498713   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:14.498734   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:14.498742   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:14.498745   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:14.502719   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:14.999025   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:14.999047   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:14.999055   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:14.999059   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:15.002812   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:15.003362   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:15.498817   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:15.498839   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:15.498846   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:15.498850   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:15.504009   23621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 10:49:15.998456   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:15.998477   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:15.998485   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:15.998488   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:16.001780   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:16.498830   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:16.498857   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:16.498868   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:16.498873   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:16.502631   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:16.998224   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:16.998257   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:16.998268   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:16.998274   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:17.001615   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:17.498645   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:17.498672   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:17.498684   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:17.498688   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:17.502201   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:17.502837   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:17.998189   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:17.998213   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:17.998220   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:17.998226   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.001816   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:18.498415   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:18.498450   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.498462   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.498469   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.502015   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:18.502523   23621 node_ready.go:49] node "ha-406505-m03" has status "Ready":"True"
	I1007 10:49:18.502543   23621 node_ready.go:38] duration metric: took 17.504667395s for node "ha-406505-m03" to be "Ready" ...
	I1007 10:49:18.502551   23621 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 10:49:18.502632   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:49:18.502642   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.502650   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.502656   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.509327   23621 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 10:49:18.518372   23621 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ghmwd" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.518459   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghmwd
	I1007 10:49:18.518464   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.518472   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.518479   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.521616   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:18.522356   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:18.522371   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.522378   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.522382   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.524976   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:49:18.525512   23621 pod_ready.go:93] pod "coredns-7c65d6cfc9-ghmwd" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:18.525532   23621 pod_ready.go:82] duration metric: took 7.133708ms for pod "coredns-7c65d6cfc9-ghmwd" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.525541   23621 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xzc88" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.525593   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xzc88
	I1007 10:49:18.525602   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.525608   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.525612   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.528321   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:49:18.529035   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:18.529049   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.529055   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.529058   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.531646   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:49:18.532124   23621 pod_ready.go:93] pod "coredns-7c65d6cfc9-xzc88" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:18.532141   23621 pod_ready.go:82] duration metric: took 6.593928ms for pod "coredns-7c65d6cfc9-xzc88" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.532153   23621 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.532225   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-406505
	I1007 10:49:18.532234   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.532244   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.532249   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.534614   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:49:18.535248   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:18.535264   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.535274   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.535279   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.537970   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:49:18.538368   23621 pod_ready.go:93] pod "etcd-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:18.538387   23621 pod_ready.go:82] duration metric: took 6.225816ms for pod "etcd-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.538401   23621 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.538461   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-406505-m02
	I1007 10:49:18.538472   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.538483   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.538491   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.541748   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:18.542359   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:18.542377   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.542389   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.542397   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.545668   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:18.546089   23621 pod_ready.go:93] pod "etcd-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:18.546104   23621 pod_ready.go:82] duration metric: took 7.695818ms for pod "etcd-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.546113   23621 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.698417   23621 request.go:632] Waited for 152.247174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-406505-m03
	I1007 10:49:18.698479   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-406505-m03
	I1007 10:49:18.698485   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.698492   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.698497   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.702261   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:18.899482   23621 request.go:632] Waited for 196.389358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:18.899569   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:18.899582   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.899593   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.899603   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.903728   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:18.904256   23621 pod_ready.go:93] pod "etcd-ha-406505-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:18.904275   23621 pod_ready.go:82] duration metric: took 358.156028ms for pod "etcd-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.904291   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:19.099454   23621 request.go:632] Waited for 195.101714ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505
	I1007 10:49:19.099547   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505
	I1007 10:49:19.099559   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:19.099569   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:19.099575   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:19.103611   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:19.298735   23621 request.go:632] Waited for 194.375211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:19.298818   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:19.298825   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:19.298837   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:19.298856   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:19.302548   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:19.303053   23621 pod_ready.go:93] pod "kube-apiserver-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:19.303069   23621 pod_ready.go:82] duration metric: took 398.772541ms for pod "kube-apiserver-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:19.303079   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:19.499176   23621 request.go:632] Waited for 196.018641ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505-m02
	I1007 10:49:19.499270   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505-m02
	I1007 10:49:19.499283   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:19.499296   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:19.499309   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:19.503085   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:19.699374   23621 request.go:632] Waited for 195.380837ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:19.699426   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:19.699432   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:19.699439   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:19.699443   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:19.703099   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:19.703625   23621 pod_ready.go:93] pod "kube-apiserver-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:19.703644   23621 pod_ready.go:82] duration metric: took 400.557163ms for pod "kube-apiserver-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:19.703654   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:19.899212   23621 request.go:632] Waited for 195.494385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505-m03
	I1007 10:49:19.899266   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505-m03
	I1007 10:49:19.899271   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:19.899283   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:19.899289   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:19.902896   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:20.098927   23621 request.go:632] Waited for 195.376619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:20.098987   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:20.098993   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:20.099000   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:20.099004   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:20.102179   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:20.102740   23621 pod_ready.go:93] pod "kube-apiserver-ha-406505-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:20.102763   23621 pod_ready.go:82] duration metric: took 399.102679ms for pod "kube-apiserver-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:20.102773   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:20.298944   23621 request.go:632] Waited for 196.089064ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505
	I1007 10:49:20.299004   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505
	I1007 10:49:20.299010   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:20.299017   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:20.299023   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:20.302867   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:20.498409   23621 request.go:632] Waited for 194.294244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:20.498569   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:20.498582   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:20.498592   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:20.498599   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:20.502204   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:20.503003   23621 pod_ready.go:93] pod "kube-controller-manager-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:20.503027   23621 pod_ready.go:82] duration metric: took 400.247835ms for pod "kube-controller-manager-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:20.503037   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:20.699318   23621 request.go:632] Waited for 196.218592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505-m02
	I1007 10:49:20.699394   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505-m02
	I1007 10:49:20.699405   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:20.699415   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:20.699424   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:20.702950   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:20.899287   23621 request.go:632] Waited for 195.402635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:20.899343   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:20.899349   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:20.899370   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:20.899375   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:20.903339   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:20.904141   23621 pod_ready.go:93] pod "kube-controller-manager-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:20.904160   23621 pod_ready.go:82] duration metric: took 401.116067ms for pod "kube-controller-manager-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:20.904170   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:21.099320   23621 request.go:632] Waited for 195.054621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505-m03
	I1007 10:49:21.099383   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505-m03
	I1007 10:49:21.099391   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:21.099404   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:21.099415   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:21.103012   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:21.299153   23621 request.go:632] Waited for 195.377964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:21.299213   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:21.299218   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:21.299225   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:21.299229   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:21.303015   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:21.303516   23621 pod_ready.go:93] pod "kube-controller-manager-ha-406505-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:21.303534   23621 pod_ready.go:82] duration metric: took 399.355676ms for pod "kube-controller-manager-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:21.303543   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6ng4z" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:21.498530   23621 request.go:632] Waited for 194.920994ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6ng4z
	I1007 10:49:21.498597   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6ng4z
	I1007 10:49:21.498603   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:21.498610   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:21.498614   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:21.502242   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:21.699351   23621 request.go:632] Waited for 196.362706ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:21.699418   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:21.699423   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:21.699431   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:21.699435   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:21.702722   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:21.703412   23621 pod_ready.go:93] pod "kube-proxy-6ng4z" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:21.703429   23621 pod_ready.go:82] duration metric: took 399.878679ms for pod "kube-proxy-6ng4z" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:21.703439   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c79zf" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:21.898495   23621 request.go:632] Waited for 195.001064ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c79zf
	I1007 10:49:21.898570   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c79zf
	I1007 10:49:21.898576   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:21.898583   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:21.898587   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:21.903113   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:22.099311   23621 request.go:632] Waited for 195.352243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:22.099376   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:22.099384   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:22.099392   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:22.099397   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:22.102668   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:22.103269   23621 pod_ready.go:93] pod "kube-proxy-c79zf" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:22.103284   23621 pod_ready.go:82] duration metric: took 399.838704ms for pod "kube-proxy-c79zf" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:22.103298   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nlnhf" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:22.299438   23621 request.go:632] Waited for 196.048125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nlnhf
	I1007 10:49:22.299517   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nlnhf
	I1007 10:49:22.299528   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:22.299539   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:22.299548   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:22.303349   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:22.499362   23621 request.go:632] Waited for 195.369323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:22.499426   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:22.499434   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:22.499445   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:22.499452   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:22.503812   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:22.504569   23621 pod_ready.go:93] pod "kube-proxy-nlnhf" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:22.504595   23621 pod_ready.go:82] duration metric: took 401.287955ms for pod "kube-proxy-nlnhf" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:22.504608   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:22.698460   23621 request.go:632] Waited for 193.785531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505
	I1007 10:49:22.698548   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505
	I1007 10:49:22.698557   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:22.698568   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:22.698578   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:22.702017   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:22.898981   23621 request.go:632] Waited for 196.377795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:22.899067   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:22.899078   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:22.899089   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:22.899095   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:22.902303   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:22.903166   23621 pod_ready.go:93] pod "kube-scheduler-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:22.903182   23621 pod_ready.go:82] duration metric: took 398.566323ms for pod "kube-scheduler-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:22.903191   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:23.099385   23621 request.go:632] Waited for 196.133679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505-m02
	I1007 10:49:23.099448   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505-m02
	I1007 10:49:23.099455   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:23.099466   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:23.099472   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:23.102786   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:23.298901   23621 request.go:632] Waited for 195.266193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:23.298979   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:23.299002   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:23.299017   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:23.299025   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:23.302232   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:23.302790   23621 pod_ready.go:93] pod "kube-scheduler-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:23.302809   23621 pod_ready.go:82] duration metric: took 399.610952ms for pod "kube-scheduler-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:23.302821   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:23.499180   23621 request.go:632] Waited for 196.292359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505-m03
	I1007 10:49:23.499272   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505-m03
	I1007 10:49:23.499287   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:23.499297   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:23.499301   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:23.502869   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:23.699193   23621 request.go:632] Waited for 195.355503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:23.699258   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:23.699265   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:23.699273   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:23.699279   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:23.703084   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:23.703667   23621 pod_ready.go:93] pod "kube-scheduler-ha-406505-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:23.703685   23621 pod_ready.go:82] duration metric: took 400.856999ms for pod "kube-scheduler-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:23.703698   23621 pod_ready.go:39] duration metric: took 5.201137337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
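The pod_ready waits above come down to fetching each pod and inspecting its Ready condition. A minimal client-go sketch of that single check follows; the kubeconfig path and pod name are placeholders rather than values from this run, and the real pod_ready.go loop also keeps retrying until the 6m0s deadline.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig the same way kubectl would (path is a placeholder).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Fetch the pod and look for its Ready condition, as the wait above does.
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-ha-406505", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			fmt.Printf("pod %s Ready=%s\n", pod.Name, c.Status)
		}
	}
}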
	I1007 10:49:23.703714   23621 api_server.go:52] waiting for apiserver process to appear ...
	I1007 10:49:23.703771   23621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 10:49:23.720988   23621 api_server.go:72] duration metric: took 22.980139715s to wait for apiserver process to appear ...
	I1007 10:49:23.721017   23621 api_server.go:88] waiting for apiserver healthz status ...
	I1007 10:49:23.721038   23621 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I1007 10:49:23.727765   23621 api_server.go:279] https://192.168.39.250:8443/healthz returned 200:
	ok
	I1007 10:49:23.727841   23621 round_trippers.go:463] GET https://192.168.39.250:8443/version
	I1007 10:49:23.727846   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:23.727855   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:23.727860   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:23.728928   23621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1007 10:49:23.729002   23621 api_server.go:141] control plane version: v1.31.1
	I1007 10:49:23.729019   23621 api_server.go:131] duration metric: took 7.995236ms to wait for apiserver health ...
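The healthz and version probes can be reproduced with client-go's discovery client, as sketched below. The kubeconfig path is a placeholder; on a healthy apiserver the /healthz body is just "ok", which is what the log shows.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /healthz through the authenticated REST client.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version, the source of the "control plane version" line above.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("server version:", v.GitVersion)
}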
	I1007 10:49:23.729029   23621 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 10:49:23.899405   23621 request.go:632] Waited for 170.304588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:49:23.899474   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:49:23.899479   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:23.899494   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:23.899501   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:23.905647   23621 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 10:49:23.912018   23621 system_pods.go:59] 24 kube-system pods found
	I1007 10:49:23.912046   23621 system_pods.go:61] "coredns-7c65d6cfc9-ghmwd" [8d8533b9-192b-49a8-8d17-96ffd98cb729] Running
	I1007 10:49:23.912051   23621 system_pods.go:61] "coredns-7c65d6cfc9-xzc88" [f22736c0-5ca4-4c9b-bcd4-cf95f9390507] Running
	I1007 10:49:23.912055   23621 system_pods.go:61] "etcd-ha-406505" [06acd5be-60d4-4e5d-878a-eb237eccef90] Running
	I1007 10:49:23.912059   23621 system_pods.go:61] "etcd-ha-406505-m02" [6bac8986-60bc-4067-8d14-39e7a7d89de4] Running
	I1007 10:49:23.912064   23621 system_pods.go:61] "etcd-ha-406505-m03" [2c0079fb-51f1-423c-8b4c-893824342cd6] Running
	I1007 10:49:23.912069   23621 system_pods.go:61] "kindnet-28vpp" [c14e8bdf-ebc5-4349-adb4-6786cd15551d] Running
	I1007 10:49:23.912074   23621 system_pods.go:61] "kindnet-h8fh4" [4963cef8-d0f0-47a7-a9f3-4ec6cc1cbdd2] Running
	I1007 10:49:23.912079   23621 system_pods.go:61] "kindnet-pt74h" [bb72605c-a772-4b04-a14d-02efe957c9d0] Running
	I1007 10:49:23.912087   23621 system_pods.go:61] "kube-apiserver-ha-406505" [86ec3125-9faf-431c-829c-74bedca10848] Running
	I1007 10:49:23.912092   23621 system_pods.go:61] "kube-apiserver-ha-406505-m02" [9b1f1980-971a-4059-90f3-75aa418811e9] Running
	I1007 10:49:23.912101   23621 system_pods.go:61] "kube-apiserver-ha-406505-m03" [8bc80684-cd9a-40b1-94e1-02cb77917c36] Running
	I1007 10:49:23.912106   23621 system_pods.go:61] "kube-controller-manager-ha-406505" [9f228931-e5fe-4983-aed5-71ec54a10242] Running
	I1007 10:49:23.912111   23621 system_pods.go:61] "kube-controller-manager-ha-406505-m02" [87c1a5f3-2c61-40b6-8ac6-8641fe480883] Running
	I1007 10:49:23.912116   23621 system_pods.go:61] "kube-controller-manager-ha-406505-m03" [ab97ec1a-fb7e-42a5-b77c-721ccf85db1d] Running
	I1007 10:49:23.912120   23621 system_pods.go:61] "kube-proxy-6ng4z" [0bbf71c3-f4c6-44e2-a86f-2528957fd17e] Running
	I1007 10:49:23.912123   23621 system_pods.go:61] "kube-proxy-c79zf" [2b12aaa5-9560-459b-a3bb-e45e73a6b663] Running
	I1007 10:49:23.912129   23621 system_pods.go:61] "kube-proxy-nlnhf" [053080d5-38da-4108-96aa-f4a8dbe5de91] Running
	I1007 10:49:23.912132   23621 system_pods.go:61] "kube-scheduler-ha-406505" [40a9a2f6-4f4e-48eb-8ef2-958b87cea171] Running
	I1007 10:49:23.912135   23621 system_pods.go:61] "kube-scheduler-ha-406505-m02" [b1def4a5-2143-46b6-ae4a-44b7997ec7b2] Running
	I1007 10:49:23.912139   23621 system_pods.go:61] "kube-scheduler-ha-406505-m03" [da8d486f-250a-4961-ac7c-b1435c52a3ca] Running
	I1007 10:49:23.912147   23621 system_pods.go:61] "kube-vip-ha-406505" [e31ca80b-44ca-4b0a-8d38-ba06f81592f8] Running
	I1007 10:49:23.912152   23621 system_pods.go:61] "kube-vip-ha-406505-m02" [982dab2a-18d2-409a-9d27-51f746597898] Running
	I1007 10:49:23.912155   23621 system_pods.go:61] "kube-vip-ha-406505-m03" [a90a6084-73a3-476c-9729-1d8b45c6f3fc] Running
	I1007 10:49:23.912160   23621 system_pods.go:61] "storage-provisioner" [be10b32c-e562-40ef-8b47-04cd1caf9778] Running
	I1007 10:49:23.912167   23621 system_pods.go:74] duration metric: took 183.129229ms to wait for pod list to return data ...
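The system_pods wait above is essentially one List call against kube-system followed by a per-pod phase check. A minimal sketch, again with a placeholder kubeconfig path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	running := 0
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			running++
		}
	}
	fmt.Printf("%d kube-system pods found, %d running\n", len(pods.Items), running)
}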
	I1007 10:49:23.912178   23621 default_sa.go:34] waiting for default service account to be created ...
	I1007 10:49:24.099457   23621 request.go:632] Waited for 187.192356ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/default/serviceaccounts
	I1007 10:49:24.099519   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/default/serviceaccounts
	I1007 10:49:24.099524   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:24.099532   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:24.099538   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:24.104028   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:24.104180   23621 default_sa.go:45] found service account: "default"
	I1007 10:49:24.104202   23621 default_sa.go:55] duration metric: took 192.014074ms for default service account to be created ...
	I1007 10:49:24.104214   23621 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 10:49:24.299461   23621 request.go:632] Waited for 195.156179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:49:24.299513   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:49:24.299518   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:24.299525   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:24.299530   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:24.305308   23621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 10:49:24.311531   23621 system_pods.go:86] 24 kube-system pods found
	I1007 10:49:24.311559   23621 system_pods.go:89] "coredns-7c65d6cfc9-ghmwd" [8d8533b9-192b-49a8-8d17-96ffd98cb729] Running
	I1007 10:49:24.311565   23621 system_pods.go:89] "coredns-7c65d6cfc9-xzc88" [f22736c0-5ca4-4c9b-bcd4-cf95f9390507] Running
	I1007 10:49:24.311569   23621 system_pods.go:89] "etcd-ha-406505" [06acd5be-60d4-4e5d-878a-eb237eccef90] Running
	I1007 10:49:24.311575   23621 system_pods.go:89] "etcd-ha-406505-m02" [6bac8986-60bc-4067-8d14-39e7a7d89de4] Running
	I1007 10:49:24.311579   23621 system_pods.go:89] "etcd-ha-406505-m03" [2c0079fb-51f1-423c-8b4c-893824342cd6] Running
	I1007 10:49:24.311583   23621 system_pods.go:89] "kindnet-28vpp" [c14e8bdf-ebc5-4349-adb4-6786cd15551d] Running
	I1007 10:49:24.311589   23621 system_pods.go:89] "kindnet-h8fh4" [4963cef8-d0f0-47a7-a9f3-4ec6cc1cbdd2] Running
	I1007 10:49:24.311593   23621 system_pods.go:89] "kindnet-pt74h" [bb72605c-a772-4b04-a14d-02efe957c9d0] Running
	I1007 10:49:24.311599   23621 system_pods.go:89] "kube-apiserver-ha-406505" [86ec3125-9faf-431c-829c-74bedca10848] Running
	I1007 10:49:24.311602   23621 system_pods.go:89] "kube-apiserver-ha-406505-m02" [9b1f1980-971a-4059-90f3-75aa418811e9] Running
	I1007 10:49:24.311606   23621 system_pods.go:89] "kube-apiserver-ha-406505-m03" [8bc80684-cd9a-40b1-94e1-02cb77917c36] Running
	I1007 10:49:24.311611   23621 system_pods.go:89] "kube-controller-manager-ha-406505" [9f228931-e5fe-4983-aed5-71ec54a10242] Running
	I1007 10:49:24.311617   23621 system_pods.go:89] "kube-controller-manager-ha-406505-m02" [87c1a5f3-2c61-40b6-8ac6-8641fe480883] Running
	I1007 10:49:24.311620   23621 system_pods.go:89] "kube-controller-manager-ha-406505-m03" [ab97ec1a-fb7e-42a5-b77c-721ccf85db1d] Running
	I1007 10:49:24.311626   23621 system_pods.go:89] "kube-proxy-6ng4z" [0bbf71c3-f4c6-44e2-a86f-2528957fd17e] Running
	I1007 10:49:24.311629   23621 system_pods.go:89] "kube-proxy-c79zf" [2b12aaa5-9560-459b-a3bb-e45e73a6b663] Running
	I1007 10:49:24.311635   23621 system_pods.go:89] "kube-proxy-nlnhf" [053080d5-38da-4108-96aa-f4a8dbe5de91] Running
	I1007 10:49:24.311638   23621 system_pods.go:89] "kube-scheduler-ha-406505" [40a9a2f6-4f4e-48eb-8ef2-958b87cea171] Running
	I1007 10:49:24.311643   23621 system_pods.go:89] "kube-scheduler-ha-406505-m02" [b1def4a5-2143-46b6-ae4a-44b7997ec7b2] Running
	I1007 10:49:24.311646   23621 system_pods.go:89] "kube-scheduler-ha-406505-m03" [da8d486f-250a-4961-ac7c-b1435c52a3ca] Running
	I1007 10:49:24.311649   23621 system_pods.go:89] "kube-vip-ha-406505" [e31ca80b-44ca-4b0a-8d38-ba06f81592f8] Running
	I1007 10:49:24.311652   23621 system_pods.go:89] "kube-vip-ha-406505-m02" [982dab2a-18d2-409a-9d27-51f746597898] Running
	I1007 10:49:24.311655   23621 system_pods.go:89] "kube-vip-ha-406505-m03" [a90a6084-73a3-476c-9729-1d8b45c6f3fc] Running
	I1007 10:49:24.311658   23621 system_pods.go:89] "storage-provisioner" [be10b32c-e562-40ef-8b47-04cd1caf9778] Running
	I1007 10:49:24.311664   23621 system_pods.go:126] duration metric: took 207.442478ms to wait for k8s-apps to be running ...
	I1007 10:49:24.311673   23621 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 10:49:24.311718   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 10:49:24.329372   23621 system_svc.go:56] duration metric: took 17.689597ms WaitForService to wait for kubelet
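The kubelet probe relies only on the exit status of systemctl is-active --quiet, which prints nothing. A local Go equivalent is sketched below; minikube itself runs the command over SSH inside the guest, which this sketch does not attempt.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 0 means the unit is active; anything else (or a missing
	// systemctl) surfaces as an error from Run.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}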
	I1007 10:49:24.329408   23621 kubeadm.go:582] duration metric: took 23.588563567s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 10:49:24.329431   23621 node_conditions.go:102] verifying NodePressure condition ...
	I1007 10:49:24.498716   23621 request.go:632] Waited for 169.197079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes
	I1007 10:49:24.498772   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes
	I1007 10:49:24.498777   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:24.498785   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:24.498788   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:24.502487   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:24.503651   23621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 10:49:24.503669   23621 node_conditions.go:123] node cpu capacity is 2
	I1007 10:49:24.503680   23621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 10:49:24.503684   23621 node_conditions.go:123] node cpu capacity is 2
	I1007 10:49:24.503688   23621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 10:49:24.503691   23621 node_conditions.go:123] node cpu capacity is 2
	I1007 10:49:24.503697   23621 node_conditions.go:105] duration metric: took 174.259877ms to run NodePressure ...
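The NodePressure step reads each node's reported capacity, which is where the cpu and ephemeral-storage figures above come from. A minimal client-go sketch with a placeholder kubeconfig path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a map of resource name to quantity in the node status.
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}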
	I1007 10:49:24.503713   23621 start.go:241] waiting for startup goroutines ...
	I1007 10:49:24.503733   23621 start.go:255] writing updated cluster config ...
	I1007 10:49:24.504082   23621 ssh_runner.go:195] Run: rm -f paused
	I1007 10:49:24.554954   23621 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 10:49:24.557268   23621 out.go:177] * Done! kubectl is now configured to use "ha-406505" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 07 10:53:08 ha-406505 crio[660]: time="2024-10-07 10:53:08.401707271Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0acf6825-1c1d-401b-bca7-15a48618f811 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:53:08 ha-406505 crio[660]: time="2024-10-07 10:53:08.402951883Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b3553ad-d165-40d9-9e36-fda6eb392136 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:08 ha-406505 crio[660]: time="2024-10-07 10:53:08.403360411Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298388403340975,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b3553ad-d165-40d9-9e36-fda6eb392136 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:08 ha-406505 crio[660]: time="2024-10-07 10:53:08.404121402Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=591b34c9-5f03-4d0e-a1c0-8e309705c0cb name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:08 ha-406505 crio[660]: time="2024-10-07 10:53:08.404178888Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=591b34c9-5f03-4d0e-a1c0-8e309705c0cb name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:08 ha-406505 crio[660]: time="2024-10-07 10:53:08.404478840Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d9a2a1043aa257a5b49746cc308a94d265694817a7f0fbbd68ad171298991f1,PodSandboxId:77c3242ae96e046410e9f20005f1fbb24da588dfaee8e50c326db63c8937c8e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728298170563505966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tzgjx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b76f90b1-386b-4eda-966f-2400d6bf4412,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cd2f018baffde621ecccf90e2bebf1bd5127de1fc5363ec02ef8a77dfe2fb6,PodSandboxId:ce1fc89e90c8e387dff576649434b5b4695d8ebba82ba26bc3f4cefcfc33c65a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728298019505152058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be10b32c-e562-40ef-8b47-04cd1caf9778,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0cc4a36e486c6a488e846bfbf03e43b7da70bb2bf99a153487b87f34d565f12,PodSandboxId:32fee1b9f25d39415e7cd76b67ef4415ac18c57e3916abbb8b84f537d05d70bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019502169374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xzc88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22736c0-5ca4-4c9b-bcd4-cf95f9390507,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ebc4ee6afc90a7e8d867a5ba3221808427590e259e04a5ba21cc1196931b136,PodSandboxId:6142c38866566220880ffbabeb4647565b882222de06d8ef0ebb773d8133ce82,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019443860618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d8533b9-19
2b-49a8-8d17-96ffd98cb729,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4abb8ea9312274b1d39534e4b29dcfa3f2435a34e478f7748745c76ad1380dec,PodSandboxId:33e535c0eb67f0d998329496b1eb04fc05224699267732416929dd8d574448c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17282980
07472938380,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pt74h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb72605c-a772-4b04-a14d-02efe957c9d0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b7425285dcb9164d5c7fff76a667317585d30360dbfa7acfcbbb4564f111ff,PodSandboxId:f6d2bf974f6664bc2f90e25c4c24158ddf2d928418ed69b8269deb018a7c47d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728298007298600158,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlnhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053080d5-38da-4108-96aa-f4a8dbe5de91,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79eb2653667b5edb3b74e57f09824040325c3d1478d135cb84dc2fc3ad5cffdf,PodSandboxId:faf0d86acd1e3016c8943a6859ab004358a6ebe57d3124f3a3f7e2cf653dcbce,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728297999352793772,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bdcf35327874f36021578ca054760a4,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4965d1b169f33dee456a383fa75c0f93cca6bf8017be5bf1f0787d83919887,PodSandboxId:77c273367dc317c1dbfced48163648058a222251f6c9e1c99ceebb3d8512736d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728297996346881143,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10aaa3e84694103c024dc95a3ae5c57f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b63558545dbd5bc6d949b55a25a4e873994215448c6c82acc474c3b3804be46,PodSandboxId:de56de352fe21ebf99f251426625b2e2196c9791eacf4a3a0f0cc212470d6959,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728297996306701860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e0002ddfebe157cb7f0f09bdb94c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11a16a81bf6bf12ab8aa881ebbb9fc65f881114f3caf29ac204033459e11987b,PodSandboxId:b351c9fd7630d9f6bd8758086f9311cf9cf764c9d014e815df7e65a88ac09104,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728297996266468474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406505,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 572e44bb4eeb4579e4fb7c299dd7cd5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0b61d1fd92070463676e33d6b2f91e643d202d53b9af7712c8459581d39750,PodSandboxId:c4fb1e79d237901ac30099998c455123e96284ff720650ca3e08edf1d232d547,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728297996234033589,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406505,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01277ab648416b0c5ac093cf7ea4b7be,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=591b34c9-5f03-4d0e-a1c0-8e309705c0cb name=/runtime.v1.RuntimeService/ListContainers
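The CRI-O entries above are the runtime's side of CRI calls such as ListContainers. The same call can be made directly against the CRI gRPC socket, roughly what crictl does; the sketch below assumes the conventional CRI-O socket path /var/run/crio/crio.sock.

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI-O socket (path is an assumption; adjust for other runtimes).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	// An empty filter returns the full container list, as in the log above.
	resp, err := rt.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s %s %s\n", c.Id, c.Metadata.Name, c.State)
	}
}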
	Oct 07 10:53:08 ha-406505 crio[660]: time="2024-10-07 10:53:08.448335717Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f660a22d-d01f-4934-8ed2-1e01eee35caf name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 07 10:53:08 ha-406505 crio[660]: time="2024-10-07 10:53:08.448663158Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c745bc99-d8f5-4e92-931d-94dc5261ebed name=/runtime.v1.RuntimeService/Version
	Oct 07 10:53:08 ha-406505 crio[660]: time="2024-10-07 10:53:08.448717936Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c745bc99-d8f5-4e92-931d-94dc5261ebed name=/runtime.v1.RuntimeService/Version
	Oct 07 10:53:08 ha-406505 crio[660]: time="2024-10-07 10:53:08.449192927Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:77c3242ae96e046410e9f20005f1fbb24da588dfaee8e50c326db63c8937c8e3,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-tzgjx,Uid:b76f90b1-386b-4eda-966f-2400d6bf4412,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728298167304213439,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-tzgjx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b76f90b1-386b-4eda-966f-2400d6bf4412,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T10:49:25.487261096Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ce1fc89e90c8e387dff576649434b5b4695d8ebba82ba26bc3f4cefcfc33c65a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:be10b32c-e562-40ef-8b47-04cd1caf9778,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1728298019253313077,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be10b32c-e562-40ef-8b47-04cd1caf9778,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-07T10:46:58.927459721Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:32fee1b9f25d39415e7cd76b67ef4415ac18c57e3916abbb8b84f537d05d70bf,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-xzc88,Uid:f22736c0-5ca4-4c9b-bcd4-cf95f9390507,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728298019253174253,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-xzc88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22736c0-5ca4-4c9b-bcd4-cf95f9390507,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T10:46:58.921906033Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6142c38866566220880ffbabeb4647565b882222de06d8ef0ebb773d8133ce82,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-ghmwd,Uid:8d8533b9-192b-49a8-8d17-96ffd98cb729,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1728298019215051273,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d8533b9-192b-49a8-8d17-96ffd98cb729,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T10:46:58.907951542Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f6d2bf974f6664bc2f90e25c4c24158ddf2d928418ed69b8269deb018a7c47d3,Metadata:&PodSandboxMetadata{Name:kube-proxy-nlnhf,Uid:053080d5-38da-4108-96aa-f4a8dbe5de91,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728298007038457748,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-nlnhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053080d5-38da-4108-96aa-f4a8dbe5de91,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-10-07T10:46:46.711366491Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:33e535c0eb67f0d998329496b1eb04fc05224699267732416929dd8d574448c5,Metadata:&PodSandboxMetadata{Name:kindnet-pt74h,Uid:bb72605c-a772-4b04-a14d-02efe957c9d0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728298007036300361,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-pt74h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb72605c-a772-4b04-a14d-02efe957c9d0,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T10:46:46.719306759Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:77c273367dc317c1dbfced48163648058a222251f6c9e1c99ceebb3d8512736d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-406505,Uid:10aaa3e84694103c024dc95a3ae5c57f,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1728297996043896138,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10aaa3e84694103c024dc95a3ae5c57f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 10aaa3e84694103c024dc95a3ae5c57f,kubernetes.io/config.seen: 2024-10-07T10:46:35.558262766Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:de56de352fe21ebf99f251426625b2e2196c9791eacf4a3a0f0cc212470d6959,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-406505,Uid:58e0002ddfebe157cb7f0f09bdb94c3e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728297996037338237,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e0002ddfebe157cb7f0f09bdb94c3e,tier: control-plane,},Ann
otations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.250:8443,kubernetes.io/config.hash: 58e0002ddfebe157cb7f0f09bdb94c3e,kubernetes.io/config.seen: 2024-10-07T10:46:35.558260431Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c4fb1e79d237901ac30099998c455123e96284ff720650ca3e08edf1d232d547,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-406505,Uid:01277ab648416b0c5ac093cf7ea4b7be,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728297996033331041,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01277ab648416b0c5ac093cf7ea4b7be,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 01277ab648416b0c5ac093cf7ea4b7be,kubernetes.io/config.seen: 2024-10-07T10:46:35.558261558Z,kubernetes.io/config.source: file,},RuntimeHandler:,
},&PodSandbox{Id:faf0d86acd1e3016c8943a6859ab004358a6ebe57d3124f3a3f7e2cf653dcbce,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-406505,Uid:7bdcf35327874f36021578ca054760a4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728297996023356334,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bdcf35327874f36021578ca054760a4,},Annotations:map[string]string{kubernetes.io/config.hash: 7bdcf35327874f36021578ca054760a4,kubernetes.io/config.seen: 2024-10-07T10:46:35.558263881Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b351c9fd7630d9f6bd8758086f9311cf9cf764c9d014e815df7e65a88ac09104,Metadata:&PodSandboxMetadata{Name:etcd-ha-406505,Uid:572e44bb4eeb4579e4fb7c299dd7cd5c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728297996009026893,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-406505,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 572e44bb4eeb4579e4fb7c299dd7cd5c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.250:2379,kubernetes.io/config.hash: 572e44bb4eeb4579e4fb7c299dd7cd5c,kubernetes.io/config.seen: 2024-10-07T10:46:35.558256702Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f660a22d-d01f-4934-8ed2-1e01eee35caf name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 07 10:53:08 ha-406505 crio[660]: time="2024-10-07 10:53:08.449967909Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae2ce361-450c-43d3-b2be-a129baaae3ae name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:08 ha-406505 crio[660]: time="2024-10-07 10:53:08.450019059Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae2ce361-450c-43d3-b2be-a129baaae3ae name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:08 ha-406505 crio[660]: time="2024-10-07 10:53:08.450228805Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d9a2a1043aa257a5b49746cc308a94d265694817a7f0fbbd68ad171298991f1,PodSandboxId:77c3242ae96e046410e9f20005f1fbb24da588dfaee8e50c326db63c8937c8e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728298170563505966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tzgjx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b76f90b1-386b-4eda-966f-2400d6bf4412,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cd2f018baffde621ecccf90e2bebf1bd5127de1fc5363ec02ef8a77dfe2fb6,PodSandboxId:ce1fc89e90c8e387dff576649434b5b4695d8ebba82ba26bc3f4cefcfc33c65a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728298019505152058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be10b32c-e562-40ef-8b47-04cd1caf9778,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0cc4a36e486c6a488e846bfbf03e43b7da70bb2bf99a153487b87f34d565f12,PodSandboxId:32fee1b9f25d39415e7cd76b67ef4415ac18c57e3916abbb8b84f537d05d70bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019502169374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xzc88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22736c0-5ca4-4c9b-bcd4-cf95f9390507,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ebc4ee6afc90a7e8d867a5ba3221808427590e259e04a5ba21cc1196931b136,PodSandboxId:6142c38866566220880ffbabeb4647565b882222de06d8ef0ebb773d8133ce82,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019443860618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d8533b9-19
2b-49a8-8d17-96ffd98cb729,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4abb8ea9312274b1d39534e4b29dcfa3f2435a34e478f7748745c76ad1380dec,PodSandboxId:33e535c0eb67f0d998329496b1eb04fc05224699267732416929dd8d574448c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17282980
07472938380,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pt74h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb72605c-a772-4b04-a14d-02efe957c9d0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b7425285dcb9164d5c7fff76a667317585d30360dbfa7acfcbbb4564f111ff,PodSandboxId:f6d2bf974f6664bc2f90e25c4c24158ddf2d928418ed69b8269deb018a7c47d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728298007298600158,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlnhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053080d5-38da-4108-96aa-f4a8dbe5de91,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79eb2653667b5edb3b74e57f09824040325c3d1478d135cb84dc2fc3ad5cffdf,PodSandboxId:faf0d86acd1e3016c8943a6859ab004358a6ebe57d3124f3a3f7e2cf653dcbce,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728297999352793772,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bdcf35327874f36021578ca054760a4,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4965d1b169f33dee456a383fa75c0f93cca6bf8017be5bf1f0787d83919887,PodSandboxId:77c273367dc317c1dbfced48163648058a222251f6c9e1c99ceebb3d8512736d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728297996346881143,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10aaa3e84694103c024dc95a3ae5c57f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b63558545dbd5bc6d949b55a25a4e873994215448c6c82acc474c3b3804be46,PodSandboxId:de56de352fe21ebf99f251426625b2e2196c9791eacf4a3a0f0cc212470d6959,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728297996306701860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e0002ddfebe157cb7f0f09bdb94c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11a16a81bf6bf12ab8aa881ebbb9fc65f881114f3caf29ac204033459e11987b,PodSandboxId:b351c9fd7630d9f6bd8758086f9311cf9cf764c9d014e815df7e65a88ac09104,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728297996266468474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406505,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 572e44bb4eeb4579e4fb7c299dd7cd5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0b61d1fd92070463676e33d6b2f91e643d202d53b9af7712c8459581d39750,PodSandboxId:c4fb1e79d237901ac30099998c455123e96284ff720650ca3e08edf1d232d547,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728297996234033589,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406505,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01277ab648416b0c5ac093cf7ea4b7be,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae2ce361-450c-43d3-b2be-a129baaae3ae name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:08 ha-406505 crio[660]: time="2024-10-07 10:53:08.453257208Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eeb37b93-3c74-4e49-8149-496bcf379f2d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:08 ha-406505 crio[660]: time="2024-10-07 10:53:08.453838678Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298388453816131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eeb37b93-3c74-4e49-8149-496bcf379f2d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:08 ha-406505 crio[660]: time="2024-10-07 10:53:08.454492204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf091007-4240-4618-8043-a4d32750c275 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:08 ha-406505 crio[660]: time="2024-10-07 10:53:08.454572858Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf091007-4240-4618-8043-a4d32750c275 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:08 ha-406505 crio[660]: time="2024-10-07 10:53:08.454822876Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d9a2a1043aa257a5b49746cc308a94d265694817a7f0fbbd68ad171298991f1,PodSandboxId:77c3242ae96e046410e9f20005f1fbb24da588dfaee8e50c326db63c8937c8e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728298170563505966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tzgjx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b76f90b1-386b-4eda-966f-2400d6bf4412,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cd2f018baffde621ecccf90e2bebf1bd5127de1fc5363ec02ef8a77dfe2fb6,PodSandboxId:ce1fc89e90c8e387dff576649434b5b4695d8ebba82ba26bc3f4cefcfc33c65a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728298019505152058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be10b32c-e562-40ef-8b47-04cd1caf9778,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0cc4a36e486c6a488e846bfbf03e43b7da70bb2bf99a153487b87f34d565f12,PodSandboxId:32fee1b9f25d39415e7cd76b67ef4415ac18c57e3916abbb8b84f537d05d70bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019502169374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xzc88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22736c0-5ca4-4c9b-bcd4-cf95f9390507,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ebc4ee6afc90a7e8d867a5ba3221808427590e259e04a5ba21cc1196931b136,PodSandboxId:6142c38866566220880ffbabeb4647565b882222de06d8ef0ebb773d8133ce82,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019443860618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d8533b9-19
2b-49a8-8d17-96ffd98cb729,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4abb8ea9312274b1d39534e4b29dcfa3f2435a34e478f7748745c76ad1380dec,PodSandboxId:33e535c0eb67f0d998329496b1eb04fc05224699267732416929dd8d574448c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17282980
07472938380,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pt74h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb72605c-a772-4b04-a14d-02efe957c9d0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b7425285dcb9164d5c7fff76a667317585d30360dbfa7acfcbbb4564f111ff,PodSandboxId:f6d2bf974f6664bc2f90e25c4c24158ddf2d928418ed69b8269deb018a7c47d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728298007298600158,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlnhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053080d5-38da-4108-96aa-f4a8dbe5de91,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79eb2653667b5edb3b74e57f09824040325c3d1478d135cb84dc2fc3ad5cffdf,PodSandboxId:faf0d86acd1e3016c8943a6859ab004358a6ebe57d3124f3a3f7e2cf653dcbce,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728297999352793772,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bdcf35327874f36021578ca054760a4,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4965d1b169f33dee456a383fa75c0f93cca6bf8017be5bf1f0787d83919887,PodSandboxId:77c273367dc317c1dbfced48163648058a222251f6c9e1c99ceebb3d8512736d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728297996346881143,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10aaa3e84694103c024dc95a3ae5c57f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b63558545dbd5bc6d949b55a25a4e873994215448c6c82acc474c3b3804be46,PodSandboxId:de56de352fe21ebf99f251426625b2e2196c9791eacf4a3a0f0cc212470d6959,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728297996306701860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e0002ddfebe157cb7f0f09bdb94c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11a16a81bf6bf12ab8aa881ebbb9fc65f881114f3caf29ac204033459e11987b,PodSandboxId:b351c9fd7630d9f6bd8758086f9311cf9cf764c9d014e815df7e65a88ac09104,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728297996266468474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406505,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 572e44bb4eeb4579e4fb7c299dd7cd5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0b61d1fd92070463676e33d6b2f91e643d202d53b9af7712c8459581d39750,PodSandboxId:c4fb1e79d237901ac30099998c455123e96284ff720650ca3e08edf1d232d547,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728297996234033589,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406505,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01277ab648416b0c5ac093cf7ea4b7be,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf091007-4240-4618-8043-a4d32750c275 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:08 ha-406505 crio[660]: time="2024-10-07 10:53:08.500641937Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5d3f3601-0312-4c37-94d5-1ed6200c0b27 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:53:08 ha-406505 crio[660]: time="2024-10-07 10:53:08.500729936Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5d3f3601-0312-4c37-94d5-1ed6200c0b27 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:53:08 ha-406505 crio[660]: time="2024-10-07 10:53:08.502470849Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f4afa033-5bee-4284-a978-507df474981c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:08 ha-406505 crio[660]: time="2024-10-07 10:53:08.502909532Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298388502882785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f4afa033-5bee-4284-a978-507df474981c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:08 ha-406505 crio[660]: time="2024-10-07 10:53:08.503667827Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8023150-e009-48e7-a191-03eb1c8ad51c name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:08 ha-406505 crio[660]: time="2024-10-07 10:53:08.503725111Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8023150-e009-48e7-a191-03eb1c8ad51c name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:08 ha-406505 crio[660]: time="2024-10-07 10:53:08.503992720Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d9a2a1043aa257a5b49746cc308a94d265694817a7f0fbbd68ad171298991f1,PodSandboxId:77c3242ae96e046410e9f20005f1fbb24da588dfaee8e50c326db63c8937c8e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728298170563505966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tzgjx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b76f90b1-386b-4eda-966f-2400d6bf4412,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cd2f018baffde621ecccf90e2bebf1bd5127de1fc5363ec02ef8a77dfe2fb6,PodSandboxId:ce1fc89e90c8e387dff576649434b5b4695d8ebba82ba26bc3f4cefcfc33c65a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728298019505152058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be10b32c-e562-40ef-8b47-04cd1caf9778,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0cc4a36e486c6a488e846bfbf03e43b7da70bb2bf99a153487b87f34d565f12,PodSandboxId:32fee1b9f25d39415e7cd76b67ef4415ac18c57e3916abbb8b84f537d05d70bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019502169374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xzc88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22736c0-5ca4-4c9b-bcd4-cf95f9390507,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ebc4ee6afc90a7e8d867a5ba3221808427590e259e04a5ba21cc1196931b136,PodSandboxId:6142c38866566220880ffbabeb4647565b882222de06d8ef0ebb773d8133ce82,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019443860618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d8533b9-19
2b-49a8-8d17-96ffd98cb729,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4abb8ea9312274b1d39534e4b29dcfa3f2435a34e478f7748745c76ad1380dec,PodSandboxId:33e535c0eb67f0d998329496b1eb04fc05224699267732416929dd8d574448c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17282980
07472938380,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pt74h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb72605c-a772-4b04-a14d-02efe957c9d0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b7425285dcb9164d5c7fff76a667317585d30360dbfa7acfcbbb4564f111ff,PodSandboxId:f6d2bf974f6664bc2f90e25c4c24158ddf2d928418ed69b8269deb018a7c47d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728298007298600158,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlnhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053080d5-38da-4108-96aa-f4a8dbe5de91,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79eb2653667b5edb3b74e57f09824040325c3d1478d135cb84dc2fc3ad5cffdf,PodSandboxId:faf0d86acd1e3016c8943a6859ab004358a6ebe57d3124f3a3f7e2cf653dcbce,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728297999352793772,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bdcf35327874f36021578ca054760a4,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4965d1b169f33dee456a383fa75c0f93cca6bf8017be5bf1f0787d83919887,PodSandboxId:77c273367dc317c1dbfced48163648058a222251f6c9e1c99ceebb3d8512736d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728297996346881143,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10aaa3e84694103c024dc95a3ae5c57f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b63558545dbd5bc6d949b55a25a4e873994215448c6c82acc474c3b3804be46,PodSandboxId:de56de352fe21ebf99f251426625b2e2196c9791eacf4a3a0f0cc212470d6959,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728297996306701860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e0002ddfebe157cb7f0f09bdb94c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11a16a81bf6bf12ab8aa881ebbb9fc65f881114f3caf29ac204033459e11987b,PodSandboxId:b351c9fd7630d9f6bd8758086f9311cf9cf764c9d014e815df7e65a88ac09104,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728297996266468474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406505,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 572e44bb4eeb4579e4fb7c299dd7cd5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0b61d1fd92070463676e33d6b2f91e643d202d53b9af7712c8459581d39750,PodSandboxId:c4fb1e79d237901ac30099998c455123e96284ff720650ca3e08edf1d232d547,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728297996234033589,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406505,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01277ab648416b0c5ac093cf7ea4b7be,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b8023150-e009-48e7-a191-03eb1c8ad51c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4d9a2a1043aa2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   77c3242ae96e0       busybox-7dff88458-tzgjx
	77cd2f018baff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   ce1fc89e90c8e       storage-provisioner
	b0cc4a36e486c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   32fee1b9f25d3       coredns-7c65d6cfc9-xzc88
	0ebc4ee6afc90       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   6142c38866566       coredns-7c65d6cfc9-ghmwd
	4abb8ea931227       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   33e535c0eb67f       kindnet-pt74h
	99b7425285dcb       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   f6d2bf974f666       kube-proxy-nlnhf
	79eb2653667b5       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   faf0d86acd1e3       kube-vip-ha-406505
	fa4965d1b169f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   77c273367dc31       kube-scheduler-ha-406505
	5b63558545dbd       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   de56de352fe21       kube-apiserver-ha-406505
	11a16a81bf6bf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   b351c9fd7630d       etcd-ha-406505
	eb0b61d1fd920       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   c4fb1e79d2379       kube-controller-manager-ha-406505
	
	
	==> coredns [0ebc4ee6afc90a7e8d867a5ba3221808427590e259e04a5ba21cc1196931b136] <==
	[INFO] 10.244.1.2:52141 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000229841s
	[INFO] 10.244.1.2:49387 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177541s
	[INFO] 10.244.1.2:51777 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003610459s
	[INFO] 10.244.1.2:53883 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000188749s
	[INFO] 10.244.2.2:56490 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126634s
	[INFO] 10.244.2.2:39507 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008519s
	[INFO] 10.244.2.2:51465 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085975s
	[INFO] 10.244.2.2:54662 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000141674s
	[INFO] 10.244.0.4:60148 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114521s
	[INFO] 10.244.0.4:60136 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000061595s
	[INFO] 10.244.0.4:58172 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000046455s
	[INFO] 10.244.0.4:37188 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001182047s
	[INFO] 10.244.0.4:43590 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000115472s
	[INFO] 10.244.0.4:58012 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000033373s
	[INFO] 10.244.1.2:49885 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158136s
	[INFO] 10.244.1.2:37058 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108137s
	[INFO] 10.244.1.2:53254 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014209s
	[INFO] 10.244.2.2:48605 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000226971s
	[INFO] 10.244.0.4:56354 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139347s
	[INFO] 10.244.0.4:53408 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091527s
	[INFO] 10.244.1.2:56944 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148755s
	[INFO] 10.244.1.2:35017 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000240968s
	[INFO] 10.244.1.2:60956 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000156011s
	[INFO] 10.244.2.2:52452 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000151278s
	[INFO] 10.244.0.4:37523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000081767s
	
	
	==> coredns [b0cc4a36e486c6a488e846bfbf03e43b7da70bb2bf99a153487b87f34d565f12] <==
	[INFO] 10.244.2.2:48222 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000340345s
	[INFO] 10.244.2.2:43370 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001307969s
	[INFO] 10.244.0.4:43661 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000100802s
	[INFO] 10.244.0.4:58476 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001778301s
	[INFO] 10.244.1.2:33672 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000201181s
	[INFO] 10.244.1.2:45107 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000305371s
	[INFO] 10.244.2.2:49200 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000294988s
	[INFO] 10.244.2.2:49393 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001850366s
	[INFO] 10.244.2.2:48213 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001471137s
	[INFO] 10.244.2.2:60468 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152254s
	[INFO] 10.244.0.4:59551 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001687745s
	[INFO] 10.244.0.4:49859 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000044844s
	[INFO] 10.244.1.2:53294 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000358207s
	[INFO] 10.244.2.2:48456 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119873s
	[INFO] 10.244.2.2:52623 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000223935s
	[INFO] 10.244.2.2:35737 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000161301s
	[INFO] 10.244.0.4:48948 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099818s
	[INFO] 10.244.0.4:38842 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000194312s
	[INFO] 10.244.1.2:52889 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000213247s
	[INFO] 10.244.2.2:54256 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000280783s
	[INFO] 10.244.2.2:50232 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000318899s
	[INFO] 10.244.2.2:39214 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000147924s
	[INFO] 10.244.0.4:53521 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112358s
	[INFO] 10.244.0.4:49217 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000161935s
	[INFO] 10.244.0.4:32867 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109582s
	
	
	==> describe nodes <==
	Name:               ha-406505
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406505
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f
	                    minikube.k8s.io/name=ha-406505
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T10_46_43_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 10:46:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406505
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 10:52:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 10:49:45 +0000   Mon, 07 Oct 2024 10:46:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 10:49:45 +0000   Mon, 07 Oct 2024 10:46:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 10:49:45 +0000   Mon, 07 Oct 2024 10:46:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 10:49:45 +0000   Mon, 07 Oct 2024 10:46:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    ha-406505
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f87dab03082f46978f270a1d9209ed7f
	  System UUID:                f87dab03-082f-4697-8f27-0a1d9209ed7f
	  Boot ID:                    c90db251-8dbe-47f3-98dd-72c0b5cbd489
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-tzgjx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 coredns-7c65d6cfc9-ghmwd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m21s
	  kube-system                 coredns-7c65d6cfc9-xzc88             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m21s
	  kube-system                 etcd-ha-406505                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m26s
	  kube-system                 kindnet-pt74h                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m22s
	  kube-system                 kube-apiserver-ha-406505             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-controller-manager-ha-406505    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-proxy-nlnhf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-scheduler-ha-406505             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-vip-ha-406505                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m20s  kube-proxy       
	  Normal  Starting                 6m26s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m26s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m26s  kubelet          Node ha-406505 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m26s  kubelet          Node ha-406505 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m26s  kubelet          Node ha-406505 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m22s  node-controller  Node ha-406505 event: Registered Node ha-406505 in Controller
	  Normal  NodeReady                6m10s  kubelet          Node ha-406505 status is now: NodeReady
	  Normal  RegisteredNode           5m22s  node-controller  Node ha-406505 event: Registered Node ha-406505 in Controller
	  Normal  RegisteredNode           4m3s   node-controller  Node ha-406505 event: Registered Node ha-406505 in Controller
	
	
	Name:               ha-406505-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406505-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f
	                    minikube.k8s.io/name=ha-406505
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T10_47_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 10:47:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406505-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 10:50:41 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 07 Oct 2024 10:49:40 +0000   Mon, 07 Oct 2024 10:51:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 07 Oct 2024 10:49:40 +0000   Mon, 07 Oct 2024 10:51:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 07 Oct 2024 10:49:40 +0000   Mon, 07 Oct 2024 10:51:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 07 Oct 2024 10:49:40 +0000   Mon, 07 Oct 2024 10:51:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.37
	  Hostname:    ha-406505-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad0b7870a2a54204abf112edd9c072ce
	  System UUID:                ad0b7870-a2a5-4204-abf1-12edd9c072ce
	  Boot ID:                    0b4627e5-d7a2-40a3-9d63-8cae53190740
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-bjz2q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 etcd-ha-406505-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m28s
	  kube-system                 kindnet-h8fh4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m30s
	  kube-system                 kube-apiserver-ha-406505-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-controller-manager-ha-406505-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-proxy-6ng4z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-scheduler-ha-406505-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-vip-ha-406505-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m26s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m30s (x8 over 5m30s)  kubelet          Node ha-406505-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m30s (x8 over 5m30s)  kubelet          Node ha-406505-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m30s (x7 over 5m30s)  kubelet          Node ha-406505-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m27s                  node-controller  Node ha-406505-m02 event: Registered Node ha-406505-m02 in Controller
	  Normal  RegisteredNode           5m22s                  node-controller  Node ha-406505-m02 event: Registered Node ha-406505-m02 in Controller
	  Normal  RegisteredNode           4m3s                   node-controller  Node ha-406505-m02 event: Registered Node ha-406505-m02 in Controller
	  Normal  NodeNotReady             103s                   node-controller  Node ha-406505-m02 status is now: NodeNotReady
	
	
	Name:               ha-406505-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406505-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f
	                    minikube.k8s.io/name=ha-406505
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T10_49_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 10:48:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406505-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 10:53:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 10:49:57 +0000   Mon, 07 Oct 2024 10:48:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 10:49:57 +0000   Mon, 07 Oct 2024 10:48:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 10:49:57 +0000   Mon, 07 Oct 2024 10:48:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 10:49:57 +0000   Mon, 07 Oct 2024 10:49:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-406505-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 75575a7b8eb34e0589ff800419073c6f
	  System UUID:                75575a7b-8eb3-4e05-89ff-800419073c6f
	  Boot ID:                    797c7f20-765b-4e29-a483-d65c033a2625
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ktkg9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 etcd-ha-406505-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m10s
	  kube-system                 kindnet-28vpp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m12s
	  kube-system                 kube-apiserver-ha-406505-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-controller-manager-ha-406505-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-proxy-c79zf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-scheduler-ha-406505-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-vip-ha-406505-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m7s                   kube-proxy       
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-406505-m03 event: Registered Node ha-406505-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m12s (x8 over 4m12s)  kubelet          Node ha-406505-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m12s (x8 over 4m12s)  kubelet          Node ha-406505-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m12s (x7 over 4m12s)  kubelet          Node ha-406505-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-406505-m03 event: Registered Node ha-406505-m03 in Controller
	  Normal  RegisteredNode           4m3s                   node-controller  Node ha-406505-m03 event: Registered Node ha-406505-m03 in Controller
	
	
	Name:               ha-406505-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406505-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f
	                    minikube.k8s.io/name=ha-406505
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T10_50_06_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 10:50:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406505-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 10:52:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 10:50:36 +0000   Mon, 07 Oct 2024 10:50:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 10:50:36 +0000   Mon, 07 Oct 2024 10:50:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 10:50:36 +0000   Mon, 07 Oct 2024 10:50:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 10:50:36 +0000   Mon, 07 Oct 2024 10:50:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    ha-406505-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9eb4bdac85cb424a99b5076fbfc659b6
	  System UUID:                9eb4bdac-85cb-424a-99b5-076fbfc659b6
	  Boot ID:                    6e48a403-8d50-4a51-beab-d3d8e1e29c60
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-cqsll       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m4s
	  kube-system                 kube-proxy-8n5g6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m57s                kube-proxy       
	  Normal  RegisteredNode           3m4s                 node-controller  Node ha-406505-m04 event: Registered Node ha-406505-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m4s (x2 over 3m4s)  kubelet          Node ha-406505-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m4s (x2 over 3m4s)  kubelet          Node ha-406505-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m4s (x2 over 3m4s)  kubelet          Node ha-406505-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m3s                 node-controller  Node ha-406505-m04 event: Registered Node ha-406505-m04 in Controller
	  Normal  RegisteredNode           3m3s                 node-controller  Node ha-406505-m04 event: Registered Node ha-406505-m04 in Controller
	  Normal  NodeReady                2m43s                kubelet          Node ha-406505-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 7 10:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051371] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040405] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.858113] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.711350] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.602582] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.722628] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.057663] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056433] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.169114] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.137291] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.300660] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +4.116084] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +3.680655] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.069150] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.087227] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.089104] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.196698] kauditd_printk_skb: 31 callbacks suppressed
	[ +11.900338] kauditd_printk_skb: 28 callbacks suppressed
	[Oct 7 10:47] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [11a16a81bf6bf12ab8aa881ebbb9fc65f881114f3caf29ac204033459e11987b] <==
	{"level":"warn","ts":"2024-10-07T10:53:08.836757Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:08.837052Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:08.843687Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:08.849622Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:08.852238Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:08.853886Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:08.862972Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:08.879708Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:08.880906Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:08.895745Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:08.901625Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:08.906698Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:08.917257Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:08.926650Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:08.934071Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:08.937253Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:08.940015Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:08.947274Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:08.951511Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:08.955776Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:09.015245Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:09.027105Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:09.040597Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:09.052071Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:09.086484Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:53:09 up 7 min,  0 users,  load average: 1.08, 0.64, 0.29
	Linux ha-406505 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4abb8ea9312274b1d39534e4b29dcfa3f2435a34e478f7748745c76ad1380dec] <==
	I1007 10:52:38.834704       1 main.go:322] Node ha-406505-m04 has CIDR [10.244.3.0/24] 
	I1007 10:52:48.824984       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I1007 10:52:48.825121       1 main.go:322] Node ha-406505-m04 has CIDR [10.244.3.0/24] 
	I1007 10:52:48.825376       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I1007 10:52:48.825541       1 main.go:299] handling current node
	I1007 10:52:48.825621       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I1007 10:52:48.825668       1 main.go:322] Node ha-406505-m02 has CIDR [10.244.1.0/24] 
	I1007 10:52:48.825793       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I1007 10:52:48.825838       1 main.go:322] Node ha-406505-m03 has CIDR [10.244.2.0/24] 
	I1007 10:52:58.833626       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I1007 10:52:58.833675       1 main.go:299] handling current node
	I1007 10:52:58.833690       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I1007 10:52:58.833695       1 main.go:322] Node ha-406505-m02 has CIDR [10.244.1.0/24] 
	I1007 10:52:58.833864       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I1007 10:52:58.833902       1 main.go:322] Node ha-406505-m03 has CIDR [10.244.2.0/24] 
	I1007 10:52:58.833984       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I1007 10:52:58.834007       1 main.go:322] Node ha-406505-m04 has CIDR [10.244.3.0/24] 
	I1007 10:53:08.831971       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I1007 10:53:08.832046       1 main.go:322] Node ha-406505-m02 has CIDR [10.244.1.0/24] 
	I1007 10:53:08.832167       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I1007 10:53:08.832188       1 main.go:322] Node ha-406505-m03 has CIDR [10.244.2.0/24] 
	I1007 10:53:08.832260       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I1007 10:53:08.832280       1 main.go:322] Node ha-406505-m04 has CIDR [10.244.3.0/24] 
	I1007 10:53:08.832356       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I1007 10:53:08.832375       1 main.go:299] handling current node
	
	
	==> kube-apiserver [5b63558545dbd5bc6d949b55a25a4e873994215448c6c82acc474c3b3804be46] <==
	W1007 10:46:41.183638       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.250]
	I1007 10:46:41.185270       1 controller.go:615] quota admission added evaluator for: endpoints
	I1007 10:46:41.191014       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1007 10:46:41.276253       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1007 10:46:42.491094       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1007 10:46:42.518362       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1007 10:46:42.533655       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1007 10:46:46.678876       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1007 10:46:46.902258       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1007 10:49:31.707971       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59314: use of closed network connection
	E1007 10:49:31.903823       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59340: use of closed network connection
	E1007 10:49:32.086294       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59358: use of closed network connection
	E1007 10:49:32.297595       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59380: use of closed network connection
	E1007 10:49:32.498258       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59404: use of closed network connection
	E1007 10:49:32.676693       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59420: use of closed network connection
	E1007 10:49:32.859242       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59440: use of closed network connection
	E1007 10:49:33.057965       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59468: use of closed network connection
	E1007 10:49:33.240103       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59478: use of closed network connection
	E1007 10:49:33.559788       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59494: use of closed network connection
	E1007 10:49:33.755853       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59504: use of closed network connection
	E1007 10:49:33.944169       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59516: use of closed network connection
	E1007 10:49:34.136074       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59544: use of closed network connection
	E1007 10:49:34.332211       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59568: use of closed network connection
	E1007 10:49:34.527795       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59588: use of closed network connection
	W1007 10:51:01.196929       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.250]
	
	
	==> kube-controller-manager [eb0b61d1fd92070463676e33d6b2f91e643d202d53b9af7712c8459581d39750] <==
	I1007 10:50:05.605601       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-406505-m04\" does not exist"
	I1007 10:50:05.651707       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-406505-m04" podCIDRs=["10.244.3.0/24"]
	I1007 10:50:05.651878       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:05.652095       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:05.866588       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:06.004135       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:06.156174       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:06.156822       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-406505-m04"
	I1007 10:50:06.254557       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:06.312035       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:06.987679       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:07.073914       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:15.971952       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:26.980381       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-406505-m04"
	I1007 10:50:26.982232       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:27.002591       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:27.205853       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:36.177995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:51:25.956486       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m02"
	I1007 10:51:25.956910       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-406505-m04"
	I1007 10:51:25.977091       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m02"
	I1007 10:51:26.074899       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.887988ms"
	I1007 10:51:26.075025       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.368µs"
	I1007 10:51:26.200250       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m02"
	I1007 10:51:31.167674       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m02"
	
	
	==> kube-proxy [99b7425285dcb9164d5c7fff76a667317585d30360dbfa7acfcbbb4564f111ff] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 10:46:47.887571       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 10:46:47.911134       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.250"]
	E1007 10:46:47.911278       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 10:46:47.980015       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 10:46:47.980045       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 10:46:47.980074       1 server_linux.go:169] "Using iptables Proxier"
	I1007 10:46:47.983497       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 10:46:47.984580       1 server.go:483] "Version info" version="v1.31.1"
	I1007 10:46:47.984594       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 10:46:47.987677       1 config.go:199] "Starting service config controller"
	I1007 10:46:47.988455       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 10:46:47.988871       1 config.go:105] "Starting endpoint slice config controller"
	I1007 10:46:47.988960       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 10:46:47.990124       1 config.go:328] "Starting node config controller"
	I1007 10:46:47.990263       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 10:46:48.088926       1 shared_informer.go:320] Caches are synced for service config
	I1007 10:46:48.090118       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 10:46:48.090928       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [fa4965d1b169f33dee456a383fa75c0f93cca6bf8017be5bf1f0787d83919887] <==
	W1007 10:46:40.575139       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 10:46:40.575275       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 10:46:40.704893       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 10:46:40.704946       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 10:46:40.706026       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 10:46:40.706071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 10:46:40.735457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 10:46:40.735594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 10:46:40.745564       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1007 10:46:40.745701       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1007 10:46:40.956352       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 10:46:40.956445       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1007 10:46:43.102324       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1007 10:50:05.717930       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-cqsll\": pod kindnet-cqsll is already assigned to node \"ha-406505-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-cqsll" node="ha-406505-m04"
	E1007 10:50:05.719300       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 62093c84-d91b-44ed-a605-198bd057ee89(kube-system/kindnet-cqsll) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-cqsll"
	E1007 10:50:05.719513       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-cqsll\": pod kindnet-cqsll is already assigned to node \"ha-406505-m04\"" pod="kube-system/kindnet-cqsll"
	I1007 10:50:05.719601       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-cqsll" node="ha-406505-m04"
	E1007 10:50:05.720316       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-8n5g6\": pod kube-proxy-8n5g6 is already assigned to node \"ha-406505-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-8n5g6" node="ha-406505-m04"
	E1007 10:50:05.724984       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod df46b5c0-261e-4455-bda8-d73ef0b24faa(kube-system/kube-proxy-8n5g6) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-8n5g6"
	E1007 10:50:05.725159       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8n5g6\": pod kube-proxy-8n5g6 is already assigned to node \"ha-406505-m04\"" pod="kube-system/kube-proxy-8n5g6"
	I1007 10:50:05.725258       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-8n5g6" node="ha-406505-m04"
	E1007 10:50:05.734867       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-957n4\": pod kindnet-957n4 is already assigned to node \"ha-406505-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-957n4" node="ha-406505-m04"
	E1007 10:50:05.736396       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9b6e172b-6f7a-48e1-8a89-60f70e5b77f6(kube-system/kindnet-957n4) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-957n4"
	E1007 10:50:05.736761       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-957n4\": pod kindnet-957n4 is already assigned to node \"ha-406505-m04\"" pod="kube-system/kindnet-957n4"
	I1007 10:50:05.736855       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-957n4" node="ha-406505-m04"
	
	
	==> kubelet <==
	Oct 07 10:51:42 ha-406505 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 10:51:42 ha-406505 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 10:51:42 ha-406505 kubelet[1306]: E1007 10:51:42.610847    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298302610335333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:51:42 ha-406505 kubelet[1306]: E1007 10:51:42.610884    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298302610335333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:51:52 ha-406505 kubelet[1306]: E1007 10:51:52.612666    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298312612090878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:51:52 ha-406505 kubelet[1306]: E1007 10:51:52.612749    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298312612090878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:02 ha-406505 kubelet[1306]: E1007 10:52:02.614917    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298322614471502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:02 ha-406505 kubelet[1306]: E1007 10:52:02.615287    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298322614471502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:12 ha-406505 kubelet[1306]: E1007 10:52:12.617387    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298332617012708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:12 ha-406505 kubelet[1306]: E1007 10:52:12.617780    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298332617012708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:22 ha-406505 kubelet[1306]: E1007 10:52:22.620172    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298342619770777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:22 ha-406505 kubelet[1306]: E1007 10:52:22.620593    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298342619770777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:32 ha-406505 kubelet[1306]: E1007 10:52:32.622744    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298352622225858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:32 ha-406505 kubelet[1306]: E1007 10:52:32.622792    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298352622225858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:42 ha-406505 kubelet[1306]: E1007 10:52:42.472254    1306 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 10:52:42 ha-406505 kubelet[1306]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 10:52:42 ha-406505 kubelet[1306]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 10:52:42 ha-406505 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 10:52:42 ha-406505 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 10:52:42 ha-406505 kubelet[1306]: E1007 10:52:42.624989    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298362624467928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:42 ha-406505 kubelet[1306]: E1007 10:52:42.625274    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298362624467928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:52 ha-406505 kubelet[1306]: E1007 10:52:52.627616    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298372626959180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:52 ha-406505 kubelet[1306]: E1007 10:52:52.627689    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298372626959180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:53:02 ha-406505 kubelet[1306]: E1007 10:53:02.630238    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298382629746151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:53:02 ha-406505 kubelet[1306]: E1007 10:53:02.630676    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298382629746151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-406505 -n ha-406505
helpers_test.go:261: (dbg) Run:  kubectl --context ha-406505 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.75s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.426334747s)
ha_test.go:415: expected profile "ha-406505" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-406505\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-406505\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-406505\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.250\",\"Port\":8443,\"Kub
ernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.37\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.102\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.2\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"
metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\"
:262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-406505 -n ha-406505
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-406505 logs -n 25: (1.484601138s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-406505 cp ha-406505-m03:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2665267876/001/cp-test_ha-406505-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m03:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505:/home/docker/cp-test_ha-406505-m03_ha-406505.txt                       |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505 sudo cat                                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m03_ha-406505.txt                                 |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m03:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m02:/home/docker/cp-test_ha-406505-m03_ha-406505-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m02 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m03_ha-406505-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m03:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04:/home/docker/cp-test_ha-406505-m03_ha-406505-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m04 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m03_ha-406505-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-406505 cp testdata/cp-test.txt                                                | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2665267876/001/cp-test_ha-406505-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505:/home/docker/cp-test_ha-406505-m04_ha-406505.txt                       |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505 sudo cat                                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m04_ha-406505.txt                                 |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m02:/home/docker/cp-test_ha-406505-m04_ha-406505-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m02 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m04_ha-406505-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03:/home/docker/cp-test_ha-406505-m04_ha-406505-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m03 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m04_ha-406505-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-406505 node stop m02 -v=7                                                     | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 10:46:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 10:46:00.685163   23621 out.go:345] Setting OutFile to fd 1 ...
	I1007 10:46:00.685349   23621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:46:00.685361   23621 out.go:358] Setting ErrFile to fd 2...
	I1007 10:46:00.685369   23621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:46:00.685896   23621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 10:46:00.686526   23621 out.go:352] Setting JSON to false
	I1007 10:46:00.687357   23621 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1655,"bootTime":1728296306,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 10:46:00.687449   23621 start.go:139] virtualization: kvm guest
	I1007 10:46:00.689739   23621 out.go:177] * [ha-406505] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 10:46:00.691129   23621 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 10:46:00.691156   23621 notify.go:220] Checking for updates...
	I1007 10:46:00.693697   23621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 10:46:00.695072   23621 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:46:00.696501   23621 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:46:00.697726   23621 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 10:46:00.698926   23621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 10:46:00.700212   23621 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 10:46:00.737316   23621 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 10:46:00.738839   23621 start.go:297] selected driver: kvm2
	I1007 10:46:00.738857   23621 start.go:901] validating driver "kvm2" against <nil>
	I1007 10:46:00.738870   23621 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 10:46:00.739587   23621 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:46:00.739673   23621 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19761-3912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 10:46:00.755165   23621 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 10:46:00.755211   23621 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 10:46:00.755442   23621 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 10:46:00.755469   23621 cni.go:84] Creating CNI manager for ""
	I1007 10:46:00.755509   23621 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1007 10:46:00.755520   23621 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 10:46:00.755574   23621 start.go:340] cluster config:
	{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:46:00.755686   23621 iso.go:125] acquiring lock: {Name:mk894f62b1e70fe8556c552f818beb1e4be6fad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:46:00.757513   23621 out.go:177] * Starting "ha-406505" primary control-plane node in "ha-406505" cluster
	I1007 10:46:00.758765   23621 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:46:00.758805   23621 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 10:46:00.758823   23621 cache.go:56] Caching tarball of preloaded images
	I1007 10:46:00.758896   23621 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 10:46:00.758906   23621 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 10:46:00.759224   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:46:00.759245   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json: {Name:mk9b03e101af877bc71d822d951dd0373d9dda34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:00.759379   23621 start.go:360] acquireMachinesLock for ha-406505: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 10:46:00.759405   23621 start.go:364] duration metric: took 14.549µs to acquireMachinesLock for "ha-406505"
	I1007 10:46:00.759421   23621 start.go:93] Provisioning new machine with config: &{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:46:00.759479   23621 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 10:46:00.761273   23621 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 10:46:00.761420   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:00.761466   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:00.775977   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35573
	I1007 10:46:00.776393   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:00.776945   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:00.776968   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:00.777275   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:00.777446   23621 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:46:00.777589   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:00.777737   23621 start.go:159] libmachine.API.Create for "ha-406505" (driver="kvm2")
	I1007 10:46:00.777767   23621 client.go:168] LocalClient.Create starting
	I1007 10:46:00.777806   23621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem
	I1007 10:46:00.777846   23621 main.go:141] libmachine: Decoding PEM data...
	I1007 10:46:00.777867   23621 main.go:141] libmachine: Parsing certificate...
	I1007 10:46:00.777925   23621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem
	I1007 10:46:00.777949   23621 main.go:141] libmachine: Decoding PEM data...
	I1007 10:46:00.777966   23621 main.go:141] libmachine: Parsing certificate...
	I1007 10:46:00.777989   23621 main.go:141] libmachine: Running pre-create checks...
	I1007 10:46:00.778000   23621 main.go:141] libmachine: (ha-406505) Calling .PreCreateCheck
	I1007 10:46:00.778317   23621 main.go:141] libmachine: (ha-406505) Calling .GetConfigRaw
	I1007 10:46:00.778644   23621 main.go:141] libmachine: Creating machine...
	I1007 10:46:00.778656   23621 main.go:141] libmachine: (ha-406505) Calling .Create
	I1007 10:46:00.778771   23621 main.go:141] libmachine: (ha-406505) Creating KVM machine...
	I1007 10:46:00.779972   23621 main.go:141] libmachine: (ha-406505) DBG | found existing default KVM network
	I1007 10:46:00.780650   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:00.780522   23644 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a50}
	I1007 10:46:00.780693   23621 main.go:141] libmachine: (ha-406505) DBG | created network xml: 
	I1007 10:46:00.780713   23621 main.go:141] libmachine: (ha-406505) DBG | <network>
	I1007 10:46:00.780722   23621 main.go:141] libmachine: (ha-406505) DBG |   <name>mk-ha-406505</name>
	I1007 10:46:00.780732   23621 main.go:141] libmachine: (ha-406505) DBG |   <dns enable='no'/>
	I1007 10:46:00.780741   23621 main.go:141] libmachine: (ha-406505) DBG |   
	I1007 10:46:00.780752   23621 main.go:141] libmachine: (ha-406505) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1007 10:46:00.780763   23621 main.go:141] libmachine: (ha-406505) DBG |     <dhcp>
	I1007 10:46:00.780774   23621 main.go:141] libmachine: (ha-406505) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1007 10:46:00.780793   23621 main.go:141] libmachine: (ha-406505) DBG |     </dhcp>
	I1007 10:46:00.780806   23621 main.go:141] libmachine: (ha-406505) DBG |   </ip>
	I1007 10:46:00.780813   23621 main.go:141] libmachine: (ha-406505) DBG |   
	I1007 10:46:00.780820   23621 main.go:141] libmachine: (ha-406505) DBG | </network>
	I1007 10:46:00.780827   23621 main.go:141] libmachine: (ha-406505) DBG | 
	I1007 10:46:00.785975   23621 main.go:141] libmachine: (ha-406505) DBG | trying to create private KVM network mk-ha-406505 192.168.39.0/24...
	I1007 10:46:00.849882   23621 main.go:141] libmachine: (ha-406505) DBG | private KVM network mk-ha-406505 192.168.39.0/24 created
	I1007 10:46:00.849911   23621 main.go:141] libmachine: (ha-406505) Setting up store path in /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505 ...
	I1007 10:46:00.849973   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:00.849860   23644 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:46:00.850002   23621 main.go:141] libmachine: (ha-406505) Building disk image from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 10:46:00.850027   23621 main.go:141] libmachine: (ha-406505) Downloading /home/jenkins/minikube-integration/19761-3912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 10:46:01.096727   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:01.096588   23644 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa...
	I1007 10:46:01.205683   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:01.205510   23644 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/ha-406505.rawdisk...
	I1007 10:46:01.205717   23621 main.go:141] libmachine: (ha-406505) DBG | Writing magic tar header
	I1007 10:46:01.205736   23621 main.go:141] libmachine: (ha-406505) DBG | Writing SSH key tar header
	I1007 10:46:01.205745   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:01.205639   23644 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505 ...
	I1007 10:46:01.205758   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505
	I1007 10:46:01.205765   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines
	I1007 10:46:01.205774   23621 main.go:141] libmachine: (ha-406505) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505 (perms=drwx------)
	I1007 10:46:01.205782   23621 main.go:141] libmachine: (ha-406505) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines (perms=drwxr-xr-x)
	I1007 10:46:01.205789   23621 main.go:141] libmachine: (ha-406505) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube (perms=drwxr-xr-x)
	I1007 10:46:01.205799   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:46:01.205809   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912
	I1007 10:46:01.205820   23621 main.go:141] libmachine: (ha-406505) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912 (perms=drwxrwxr-x)
	I1007 10:46:01.205825   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 10:46:01.205832   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home/jenkins
	I1007 10:46:01.205838   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home
	I1007 10:46:01.205845   23621 main.go:141] libmachine: (ha-406505) DBG | Skipping /home - not owner
	I1007 10:46:01.205854   23621 main.go:141] libmachine: (ha-406505) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 10:46:01.205860   23621 main.go:141] libmachine: (ha-406505) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 10:46:01.205868   23621 main.go:141] libmachine: (ha-406505) Creating domain...
	I1007 10:46:01.207028   23621 main.go:141] libmachine: (ha-406505) define libvirt domain using xml: 
	I1007 10:46:01.207069   23621 main.go:141] libmachine: (ha-406505) <domain type='kvm'>
	I1007 10:46:01.207077   23621 main.go:141] libmachine: (ha-406505)   <name>ha-406505</name>
	I1007 10:46:01.207082   23621 main.go:141] libmachine: (ha-406505)   <memory unit='MiB'>2200</memory>
	I1007 10:46:01.207087   23621 main.go:141] libmachine: (ha-406505)   <vcpu>2</vcpu>
	I1007 10:46:01.207093   23621 main.go:141] libmachine: (ha-406505)   <features>
	I1007 10:46:01.207097   23621 main.go:141] libmachine: (ha-406505)     <acpi/>
	I1007 10:46:01.207103   23621 main.go:141] libmachine: (ha-406505)     <apic/>
	I1007 10:46:01.207108   23621 main.go:141] libmachine: (ha-406505)     <pae/>
	I1007 10:46:01.207115   23621 main.go:141] libmachine: (ha-406505)     
	I1007 10:46:01.207120   23621 main.go:141] libmachine: (ha-406505)   </features>
	I1007 10:46:01.207124   23621 main.go:141] libmachine: (ha-406505)   <cpu mode='host-passthrough'>
	I1007 10:46:01.207129   23621 main.go:141] libmachine: (ha-406505)   
	I1007 10:46:01.207133   23621 main.go:141] libmachine: (ha-406505)   </cpu>
	I1007 10:46:01.207137   23621 main.go:141] libmachine: (ha-406505)   <os>
	I1007 10:46:01.207141   23621 main.go:141] libmachine: (ha-406505)     <type>hvm</type>
	I1007 10:46:01.207145   23621 main.go:141] libmachine: (ha-406505)     <boot dev='cdrom'/>
	I1007 10:46:01.207150   23621 main.go:141] libmachine: (ha-406505)     <boot dev='hd'/>
	I1007 10:46:01.207154   23621 main.go:141] libmachine: (ha-406505)     <bootmenu enable='no'/>
	I1007 10:46:01.207161   23621 main.go:141] libmachine: (ha-406505)   </os>
	I1007 10:46:01.207186   23621 main.go:141] libmachine: (ha-406505)   <devices>
	I1007 10:46:01.207206   23621 main.go:141] libmachine: (ha-406505)     <disk type='file' device='cdrom'>
	I1007 10:46:01.207220   23621 main.go:141] libmachine: (ha-406505)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/boot2docker.iso'/>
	I1007 10:46:01.207236   23621 main.go:141] libmachine: (ha-406505)       <target dev='hdc' bus='scsi'/>
	I1007 10:46:01.207250   23621 main.go:141] libmachine: (ha-406505)       <readonly/>
	I1007 10:46:01.207259   23621 main.go:141] libmachine: (ha-406505)     </disk>
	I1007 10:46:01.207281   23621 main.go:141] libmachine: (ha-406505)     <disk type='file' device='disk'>
	I1007 10:46:01.207300   23621 main.go:141] libmachine: (ha-406505)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 10:46:01.207324   23621 main.go:141] libmachine: (ha-406505)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/ha-406505.rawdisk'/>
	I1007 10:46:01.207335   23621 main.go:141] libmachine: (ha-406505)       <target dev='hda' bus='virtio'/>
	I1007 10:46:01.207342   23621 main.go:141] libmachine: (ha-406505)     </disk>
	I1007 10:46:01.207348   23621 main.go:141] libmachine: (ha-406505)     <interface type='network'>
	I1007 10:46:01.207354   23621 main.go:141] libmachine: (ha-406505)       <source network='mk-ha-406505'/>
	I1007 10:46:01.207361   23621 main.go:141] libmachine: (ha-406505)       <model type='virtio'/>
	I1007 10:46:01.207369   23621 main.go:141] libmachine: (ha-406505)     </interface>
	I1007 10:46:01.207381   23621 main.go:141] libmachine: (ha-406505)     <interface type='network'>
	I1007 10:46:01.207395   23621 main.go:141] libmachine: (ha-406505)       <source network='default'/>
	I1007 10:46:01.207406   23621 main.go:141] libmachine: (ha-406505)       <model type='virtio'/>
	I1007 10:46:01.207415   23621 main.go:141] libmachine: (ha-406505)     </interface>
	I1007 10:46:01.207422   23621 main.go:141] libmachine: (ha-406505)     <serial type='pty'>
	I1007 10:46:01.207432   23621 main.go:141] libmachine: (ha-406505)       <target port='0'/>
	I1007 10:46:01.207442   23621 main.go:141] libmachine: (ha-406505)     </serial>
	I1007 10:46:01.207469   23621 main.go:141] libmachine: (ha-406505)     <console type='pty'>
	I1007 10:46:01.207491   23621 main.go:141] libmachine: (ha-406505)       <target type='serial' port='0'/>
	I1007 10:46:01.207513   23621 main.go:141] libmachine: (ha-406505)     </console>
	I1007 10:46:01.207526   23621 main.go:141] libmachine: (ha-406505)     <rng model='virtio'>
	I1007 10:46:01.207539   23621 main.go:141] libmachine: (ha-406505)       <backend model='random'>/dev/random</backend>
	I1007 10:46:01.207548   23621 main.go:141] libmachine: (ha-406505)     </rng>
	I1007 10:46:01.207554   23621 main.go:141] libmachine: (ha-406505)     
	I1007 10:46:01.207563   23621 main.go:141] libmachine: (ha-406505)     
	I1007 10:46:01.207572   23621 main.go:141] libmachine: (ha-406505)   </devices>
	I1007 10:46:01.207587   23621 main.go:141] libmachine: (ha-406505) </domain>
	I1007 10:46:01.207603   23621 main.go:141] libmachine: (ha-406505) 
	I1007 10:46:01.211673   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:76:8f:a7 in network default
	I1007 10:46:01.212309   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:01.212331   23621 main.go:141] libmachine: (ha-406505) Ensuring networks are active...
	I1007 10:46:01.212999   23621 main.go:141] libmachine: (ha-406505) Ensuring network default is active
	I1007 10:46:01.213295   23621 main.go:141] libmachine: (ha-406505) Ensuring network mk-ha-406505 is active
	I1007 10:46:01.213746   23621 main.go:141] libmachine: (ha-406505) Getting domain xml...
	I1007 10:46:01.214325   23621 main.go:141] libmachine: (ha-406505) Creating domain...
	I1007 10:46:02.421940   23621 main.go:141] libmachine: (ha-406505) Waiting to get IP...
	I1007 10:46:02.422559   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:02.422963   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:02.423013   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:02.422950   23644 retry.go:31] will retry after 195.328474ms: waiting for machine to come up
	I1007 10:46:02.620556   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:02.621120   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:02.621158   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:02.621075   23644 retry.go:31] will retry after 387.449002ms: waiting for machine to come up
	I1007 10:46:03.009575   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:03.010111   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:03.010135   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:03.010073   23644 retry.go:31] will retry after 404.721004ms: waiting for machine to come up
	I1007 10:46:03.416746   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:03.417186   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:03.417213   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:03.417138   23644 retry.go:31] will retry after 372.059443ms: waiting for machine to come up
	I1007 10:46:03.790603   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:03.791114   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:03.791143   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:03.791071   23644 retry.go:31] will retry after 494.767467ms: waiting for machine to come up
	I1007 10:46:04.287716   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:04.288192   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:04.288211   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:04.288147   23644 retry.go:31] will retry after 903.556325ms: waiting for machine to come up
	I1007 10:46:05.193010   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:05.193532   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:05.193599   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:05.193453   23644 retry.go:31] will retry after 1.025768675s: waiting for machine to come up
	I1007 10:46:06.220323   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:06.220836   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:06.220866   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:06.220776   23644 retry.go:31] will retry after 1.100294717s: waiting for machine to come up
	I1007 10:46:07.323044   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:07.323554   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:07.323582   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:07.323505   23644 retry.go:31] will retry after 1.146070621s: waiting for machine to come up
	I1007 10:46:08.470888   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:08.471336   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:08.471361   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:08.471279   23644 retry.go:31] will retry after 2.296444266s: waiting for machine to come up
	I1007 10:46:10.768938   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:10.769285   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:10.769343   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:10.769271   23644 retry.go:31] will retry after 2.239094894s: waiting for machine to come up
	I1007 10:46:13.010328   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:13.010763   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:13.010789   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:13.010721   23644 retry.go:31] will retry after 3.13857084s: waiting for machine to come up
	I1007 10:46:16.150462   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:16.150858   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:16.150885   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:16.150808   23644 retry.go:31] will retry after 3.125257266s: waiting for machine to come up
	I1007 10:46:19.280079   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:19.280531   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:19.280561   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:19.280474   23644 retry.go:31] will retry after 5.119838312s: waiting for machine to come up
	I1007 10:46:24.405645   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.406055   23621 main.go:141] libmachine: (ha-406505) Found IP for machine: 192.168.39.250
	I1007 10:46:24.406093   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has current primary IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.406101   23621 main.go:141] libmachine: (ha-406505) Reserving static IP address...
	I1007 10:46:24.406506   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find host DHCP lease matching {name: "ha-406505", mac: "52:54:00:1d:e2:d7", ip: "192.168.39.250"} in network mk-ha-406505
	I1007 10:46:24.482533   23621 main.go:141] libmachine: (ha-406505) DBG | Getting to WaitForSSH function...
	I1007 10:46:24.482567   23621 main.go:141] libmachine: (ha-406505) Reserved static IP address: 192.168.39.250
	I1007 10:46:24.482583   23621 main.go:141] libmachine: (ha-406505) Waiting for SSH to be available...
	I1007 10:46:24.485308   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.485711   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:24.485764   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.485909   23621 main.go:141] libmachine: (ha-406505) DBG | Using SSH client type: external
	I1007 10:46:24.485935   23621 main.go:141] libmachine: (ha-406505) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa (-rw-------)
	I1007 10:46:24.485971   23621 main.go:141] libmachine: (ha-406505) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 10:46:24.485988   23621 main.go:141] libmachine: (ha-406505) DBG | About to run SSH command:
	I1007 10:46:24.486003   23621 main.go:141] libmachine: (ha-406505) DBG | exit 0
	I1007 10:46:24.612334   23621 main.go:141] libmachine: (ha-406505) DBG | SSH cmd err, output: <nil>: 
	I1007 10:46:24.612631   23621 main.go:141] libmachine: (ha-406505) KVM machine creation complete!
	I1007 10:46:24.613069   23621 main.go:141] libmachine: (ha-406505) Calling .GetConfigRaw
	I1007 10:46:24.613769   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:24.614010   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:24.614210   23621 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 10:46:24.614233   23621 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:46:24.615544   23621 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 10:46:24.615563   23621 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 10:46:24.615570   23621 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 10:46:24.615577   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:24.617899   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.618287   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:24.618310   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.618494   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:24.618666   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.618809   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.618921   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:24.619056   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:46:24.619306   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:46:24.619320   23621 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 10:46:24.727419   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:46:24.727448   23621 main.go:141] libmachine: Detecting the provisioner...
	I1007 10:46:24.727458   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:24.730240   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.730602   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:24.730629   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.730740   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:24.730937   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.731096   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.731252   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:24.731417   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:46:24.731578   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:46:24.731587   23621 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 10:46:24.845378   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 10:46:24.845478   23621 main.go:141] libmachine: found compatible host: buildroot
	I1007 10:46:24.845490   23621 main.go:141] libmachine: Provisioning with buildroot...
	I1007 10:46:24.845498   23621 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:46:24.845780   23621 buildroot.go:166] provisioning hostname "ha-406505"
	I1007 10:46:24.845810   23621 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:46:24.846017   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:24.849059   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.849533   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:24.849565   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.849690   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:24.849892   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.850056   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.850226   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:24.850372   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:46:24.850530   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:46:24.850541   23621 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406505 && echo "ha-406505" | sudo tee /etc/hostname
	I1007 10:46:24.974484   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406505
	
	I1007 10:46:24.974507   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:24.977334   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.977841   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:24.977876   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.978053   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:24.978231   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.978390   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.978528   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:24.978725   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:46:24.978910   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:46:24.978926   23621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406505' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406505/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406505' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 10:46:25.097736   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:46:25.097768   23621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 10:46:25.097810   23621 buildroot.go:174] setting up certificates
	I1007 10:46:25.097819   23621 provision.go:84] configureAuth start
	I1007 10:46:25.097832   23621 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:46:25.098143   23621 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:46:25.100773   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.101119   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.101156   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.101261   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:25.103487   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.103793   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.103821   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.103966   23621 provision.go:143] copyHostCerts
	I1007 10:46:25.104016   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:46:25.104068   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 10:46:25.104102   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:46:25.104302   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 10:46:25.104436   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:46:25.104469   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 10:46:25.104478   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:46:25.104534   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 10:46:25.104606   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:46:25.104633   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 10:46:25.104641   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:46:25.104691   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 10:46:25.104770   23621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.ha-406505 san=[127.0.0.1 192.168.39.250 ha-406505 localhost minikube]
	I1007 10:46:25.393470   23621 provision.go:177] copyRemoteCerts
	I1007 10:46:25.393548   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 10:46:25.393578   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:25.396327   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.396617   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.396642   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.396839   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:25.397030   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:25.397230   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:25.397379   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:46:25.482559   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 10:46:25.482632   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1007 10:46:25.508425   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 10:46:25.508519   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 10:46:25.534913   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 10:46:25.534986   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 10:46:25.560790   23621 provision.go:87] duration metric: took 462.953383ms to configureAuth
	I1007 10:46:25.560817   23621 buildroot.go:189] setting minikube options for container-runtime
	I1007 10:46:25.560982   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:46:25.561053   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:25.563730   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.564168   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.564201   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.564402   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:25.564589   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:25.564760   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:25.564923   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:25.565085   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:46:25.565253   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:46:25.565272   23621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 10:46:25.800362   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 10:46:25.800389   23621 main.go:141] libmachine: Checking connection to Docker...
	I1007 10:46:25.800397   23621 main.go:141] libmachine: (ha-406505) Calling .GetURL
	I1007 10:46:25.801606   23621 main.go:141] libmachine: (ha-406505) DBG | Using libvirt version 6000000
	I1007 10:46:25.803904   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.804248   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.804273   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.804397   23621 main.go:141] libmachine: Docker is up and running!
	I1007 10:46:25.804414   23621 main.go:141] libmachine: Reticulating splines...
	I1007 10:46:25.804421   23621 client.go:171] duration metric: took 25.026640958s to LocalClient.Create
	I1007 10:46:25.804457   23621 start.go:167] duration metric: took 25.026720726s to libmachine.API.Create "ha-406505"
	I1007 10:46:25.804469   23621 start.go:293] postStartSetup for "ha-406505" (driver="kvm2")
	I1007 10:46:25.804483   23621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 10:46:25.804519   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:25.804801   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 10:46:25.804822   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:25.806847   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.807242   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.807267   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.807402   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:25.807601   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:25.807734   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:25.807837   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:46:25.896212   23621 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 10:46:25.901311   23621 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 10:46:25.901340   23621 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 10:46:25.901403   23621 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 10:46:25.901507   23621 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 10:46:25.901521   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /etc/ssl/certs/110962.pem
	I1007 10:46:25.901647   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 10:46:25.912163   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:46:25.940558   23621 start.go:296] duration metric: took 136.073342ms for postStartSetup
	I1007 10:46:25.940602   23621 main.go:141] libmachine: (ha-406505) Calling .GetConfigRaw
	I1007 10:46:25.941179   23621 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:46:25.943928   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.944270   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.944295   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.944594   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:46:25.944766   23621 start.go:128] duration metric: took 25.185278256s to createHost
	I1007 10:46:25.944788   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:25.946920   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.947236   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.947263   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.947390   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:25.947554   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:25.947698   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:25.947796   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:25.947917   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:46:25.948107   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:46:25.948122   23621 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 10:46:26.057285   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728297986.034090654
	
	I1007 10:46:26.057320   23621 fix.go:216] guest clock: 1728297986.034090654
	I1007 10:46:26.057332   23621 fix.go:229] Guest: 2024-10-07 10:46:26.034090654 +0000 UTC Remote: 2024-10-07 10:46:25.944777719 +0000 UTC m=+25.297917279 (delta=89.312935ms)
	I1007 10:46:26.057360   23621 fix.go:200] guest clock delta is within tolerance: 89.312935ms
	I1007 10:46:26.057368   23621 start.go:83] releasing machines lock for "ha-406505", held for 25.297953369s
	I1007 10:46:26.057394   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:26.057664   23621 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:46:26.060710   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:26.061183   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:26.061235   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:26.061454   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:26.061984   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:26.062147   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:26.062276   23621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 10:46:26.062317   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:26.062353   23621 ssh_runner.go:195] Run: cat /version.json
	I1007 10:46:26.062375   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:26.065089   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:26.065433   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:26.065561   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:26.065589   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:26.065720   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:26.065828   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:26.065853   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:26.065883   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:26.065971   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:26.066095   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:26.066095   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:26.066234   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:26.066283   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:46:26.066351   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:46:26.174687   23621 ssh_runner.go:195] Run: systemctl --version
	I1007 10:46:26.181055   23621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 10:46:26.339685   23621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 10:46:26.346234   23621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 10:46:26.346285   23621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 10:46:26.362376   23621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
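For reference, the find/-exec above does not delete anything; it only renames conflicting bridge/podman CNI configs so CRI-O ignores them. On this guest image a single config matched (a sketch of the effect, not test output):
	ls /etc/cni/net.d/   # 87-podman-bridge.conflist is now 87-podman-bridge.conflist.mk_disabled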
	I1007 10:46:26.362399   23621 start.go:495] detecting cgroup driver to use...
	I1007 10:46:26.362452   23621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 10:46:26.378080   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 10:46:26.392505   23621 docker.go:217] disabling cri-docker service (if available) ...
	I1007 10:46:26.392560   23621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 10:46:26.406784   23621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 10:46:26.422960   23621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 10:46:26.552971   23621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 10:46:26.690240   23621 docker.go:233] disabling docker service ...
	I1007 10:46:26.690309   23621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 10:46:26.706428   23621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 10:46:26.721025   23621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 10:46:26.853079   23621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 10:46:26.978324   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 10:46:26.994454   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 10:46:27.014137   23621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 10:46:27.014198   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.025749   23621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 10:46:27.025816   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.037748   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.049263   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.062174   23621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 10:46:27.074940   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.086608   23621 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.104859   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
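Net effect of the runtime setup so far: /etc/crictl.yaml now holds the single line runtime-endpoint: unix:///var/run/crio/crio.sock written by the tee above, and the sed edits should leave the CRI-O drop-in with roughly these settings (a sketch reconstructed from the commands, not a dump of the file):
	# /etc/crio/crio.conf.d/02-crio.conf, relevant keys after the edits above
	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]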
	I1007 10:46:27.116719   23621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 10:46:27.127669   23621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 10:46:27.127745   23621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 10:46:27.142518   23621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
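The sysctl failure above is expected on a freshly booted guest: the net.bridge keys only appear once br_netfilter is loaded, which the modprobe then takes care of. Assuming the module loads cleanly, the resulting state can be confirmed with:
	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables   # key exists once the module is in
	cat /proc/sys/net/ipv4/ip_forward           # 1 after the echo above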
	I1007 10:46:27.153045   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:46:27.275924   23621 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 10:46:27.373391   23621 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 10:46:27.373475   23621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 10:46:27.378225   23621 start.go:563] Will wait 60s for crictl version
	I1007 10:46:27.378286   23621 ssh_runner.go:195] Run: which crictl
	I1007 10:46:27.382179   23621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 10:46:27.423267   23621 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 10:46:27.423395   23621 ssh_runner.go:195] Run: crio --version
	I1007 10:46:27.453236   23621 ssh_runner.go:195] Run: crio --version
	I1007 10:46:27.483657   23621 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 10:46:27.484938   23621 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:46:27.487606   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:27.487998   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:27.488028   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:27.488343   23621 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 10:46:27.492528   23621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:46:27.506306   23621 kubeadm.go:883] updating cluster {Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 10:46:27.506405   23621 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:46:27.506452   23621 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:46:27.539872   23621 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 10:46:27.539951   23621 ssh_runner.go:195] Run: which lz4
	I1007 10:46:27.544145   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1007 10:46:27.544248   23621 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 10:46:27.549024   23621 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 10:46:27.549064   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 10:46:28.958319   23621 crio.go:462] duration metric: took 1.414106826s to copy over tarball
	I1007 10:46:28.958395   23621 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 10:46:30.997682   23621 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.039251996s)
	I1007 10:46:30.997713   23621 crio.go:469] duration metric: took 2.039368509s to extract the tarball
	I1007 10:46:30.997720   23621 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 10:46:31.039009   23621 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:46:31.088841   23621 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 10:46:31.088866   23621 cache_images.go:84] Images are preloaded, skipping loading
	I1007 10:46:31.088873   23621 kubeadm.go:934] updating node { 192.168.39.250 8443 v1.31.1 crio true true} ...
	I1007 10:46:31.089007   23621 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406505 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
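Nothing consumes this unit text yet; it is copied to the guest a few steps below as the 309-byte /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in (alongside /lib/systemd/system/kubelet.service) and activated by the daemon-reload/start pair that follows. On the guest the merged unit could be inspected with:
	systemctl cat kubelet   # kubelet.service plus the 10-kubeadm.conf drop-in shown above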
	I1007 10:46:31.089099   23621 ssh_runner.go:195] Run: crio config
	I1007 10:46:31.133611   23621 cni.go:84] Creating CNI manager for ""
	I1007 10:46:31.133634   23621 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 10:46:31.133642   23621 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 10:46:31.133662   23621 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.250 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-406505 NodeName:ha-406505 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 10:46:31.133799   23621 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-406505"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
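kubeadm 1.31 still accepts the kubeadm.k8s.io/v1beta3 spec above but flags it as deprecated (the two W1007 lines during init further down). If the equivalent file in the current API version were wanted, kubeadm's own migration helper would produce it; the output path here is just an example:
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
	  --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-migrated.yaml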
	
	I1007 10:46:31.133825   23621 kube-vip.go:115] generating kube-vip config ...
	I1007 10:46:31.133864   23621 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 10:46:31.150299   23621 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 10:46:31.150386   23621 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
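This manifest runs kube-vip as a static pod that advertises 192.168.39.254 as the control-plane VIP and, with lb_enable set, balances API-server traffic over IPVS, which is why the ip_vs modules were probed above. Once the pod is up, a rough manual check from the guest (a sketch, not part of the test) would be:
	lsmod | grep -E '^ip_vs'                         # modules loaded by the modprobe above
	grep control-plane.minikube.internal /etc/hosts  # maps to 192.168.39.254 (added below)
	curl -sk https://192.168.39.254:8443/healthz     # "ok" once an apiserver behind the VIP is healthy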
	I1007 10:46:31.150432   23621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 10:46:31.160704   23621 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 10:46:31.160771   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1007 10:46:31.170635   23621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1007 10:46:31.188233   23621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 10:46:31.205276   23621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1007 10:46:31.222191   23621 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1007 10:46:31.240224   23621 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 10:46:31.244214   23621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
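Both /etc/hosts edits in this run (host.minikube.internal earlier and the control-plane VIP here) use the same idempotent pattern: drop any existing entry for the name, append the desired mapping, and copy the temp file back over /etc/hosts. As a standalone sketch with the same values:
	name=control-plane.minikube.internal; ip=192.168.39.254
	{ grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$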
	I1007 10:46:31.257345   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:46:31.397967   23621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:46:31.417027   23621 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505 for IP: 192.168.39.250
	I1007 10:46:31.417077   23621 certs.go:194] generating shared ca certs ...
	I1007 10:46:31.417100   23621 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.417284   23621 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 10:46:31.417383   23621 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 10:46:31.417398   23621 certs.go:256] generating profile certs ...
	I1007 10:46:31.417447   23621 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key
	I1007 10:46:31.417461   23621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.crt with IP's: []
	I1007 10:46:31.468016   23621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.crt ...
	I1007 10:46:31.468047   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.crt: {Name:mk762d603dc2fbb5c1297f6a7a3cc345fce24083 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.468271   23621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key ...
	I1007 10:46:31.468286   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key: {Name:mk7067411a96e86ff81d9c76638d9b65fd88775f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.468374   23621 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.139948ad
	I1007 10:46:31.468389   23621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.139948ad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.250 192.168.39.254]
	I1007 10:46:31.560197   23621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.139948ad ...
	I1007 10:46:31.560235   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.139948ad: {Name:mk03ccdd590c02d4a8e3fdabb8ce2b00441c3bcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.560434   23621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.139948ad ...
	I1007 10:46:31.560450   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.139948ad: {Name:mk9acbd48737ac1a11351bcc3c9e01a19e35889d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.560533   23621 certs.go:381] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.139948ad -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt
	I1007 10:46:31.560605   23621 certs.go:385] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.139948ad -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key
	I1007 10:46:31.560660   23621 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key
	I1007 10:46:31.560674   23621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt with IP's: []
	I1007 10:46:31.824715   23621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt ...
	I1007 10:46:31.824745   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt: {Name:mk2f87794c4b3ce39df0df4382fd33d9633bb32b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.824924   23621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key ...
	I1007 10:46:31.824937   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key: {Name:mka71f56202903b2b66df7c3367c064cbfe379ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.825016   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 10:46:31.825037   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 10:46:31.825053   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 10:46:31.825068   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 10:46:31.825083   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 10:46:31.825098   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 10:46:31.825112   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 10:46:31.825130   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 10:46:31.825188   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 10:46:31.825225   23621 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 10:46:31.825236   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 10:46:31.825267   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 10:46:31.825296   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 10:46:31.825321   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 10:46:31.825363   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:46:31.825391   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:46:31.825407   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem -> /usr/share/ca-certificates/11096.pem
	I1007 10:46:31.825421   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /usr/share/ca-certificates/110962.pem
	I1007 10:46:31.825934   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 10:46:31.854979   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 10:46:31.881623   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 10:46:31.908276   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 10:46:31.933657   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1007 10:46:31.959947   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 10:46:31.985851   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 10:46:32.010600   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 10:46:32.035549   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 10:46:32.060173   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 10:46:32.084842   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 10:46:32.110513   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 10:46:32.129118   23621 ssh_runner.go:195] Run: openssl version
	I1007 10:46:32.134991   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 10:46:32.146083   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 10:46:32.150750   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 10:46:32.150813   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 10:46:32.156917   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
	I1007 10:46:32.167842   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 10:46:32.179302   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 10:46:32.184104   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 10:46:32.184166   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 10:46:32.189957   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 10:46:32.203820   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 10:46:32.218928   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:46:32.223877   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:46:32.223932   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:46:32.234358   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
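The symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) are not arbitrary: each is the OpenSSL subject hash of the linked certificate, which is exactly what the openssl x509 -hash -noout calls print. Reproducing the last link by hand would look like:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 here
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"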
	I1007 10:46:32.254776   23621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 10:46:32.262324   23621 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 10:46:32.262372   23621 kubeadm.go:392] StartCluster: {Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:46:32.262436   23621 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 10:46:32.262503   23621 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 10:46:32.310104   23621 cri.go:89] found id: ""
	I1007 10:46:32.310161   23621 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 10:46:32.319996   23621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 10:46:32.329800   23621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 10:46:32.339655   23621 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 10:46:32.339683   23621 kubeadm.go:157] found existing configuration files:
	
	I1007 10:46:32.339722   23621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 10:46:32.348661   23621 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 10:46:32.348719   23621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 10:46:32.358855   23621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 10:46:32.368082   23621 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 10:46:32.368138   23621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 10:46:32.378072   23621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 10:46:32.387338   23621 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 10:46:32.387394   23621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 10:46:32.397186   23621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 10:46:32.406684   23621 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 10:46:32.406738   23621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 10:46:32.417090   23621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 10:46:32.545879   23621 kubeadm.go:310] W1007 10:46:32.529591     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 10:46:32.546834   23621 kubeadm.go:310] W1007 10:46:32.530709     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 10:46:32.656304   23621 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 10:46:43.090298   23621 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 10:46:43.090373   23621 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 10:46:43.090492   23621 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 10:46:43.090653   23621 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 10:46:43.090862   23621 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 10:46:43.090964   23621 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 10:46:43.092688   23621 out.go:235]   - Generating certificates and keys ...
	I1007 10:46:43.092759   23621 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 10:46:43.092833   23621 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 10:46:43.092901   23621 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 10:46:43.092950   23621 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 10:46:43.092999   23621 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 10:46:43.093054   23621 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 10:46:43.093106   23621 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 10:46:43.093205   23621 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-406505 localhost] and IPs [192.168.39.250 127.0.0.1 ::1]
	I1007 10:46:43.093261   23621 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 10:46:43.093417   23621 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-406505 localhost] and IPs [192.168.39.250 127.0.0.1 ::1]
	I1007 10:46:43.093514   23621 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 10:46:43.093567   23621 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 10:46:43.093623   23621 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 10:46:43.093706   23621 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 10:46:43.093782   23621 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 10:46:43.093856   23621 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 10:46:43.093933   23621 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 10:46:43.094023   23621 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 10:46:43.094096   23621 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 10:46:43.094210   23621 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 10:46:43.094282   23621 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 10:46:43.095798   23621 out.go:235]   - Booting up control plane ...
	I1007 10:46:43.095884   23621 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 10:46:43.095971   23621 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 10:46:43.096065   23621 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 10:46:43.096171   23621 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 10:46:43.096294   23621 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 10:46:43.096350   23621 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 10:46:43.096510   23621 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 10:46:43.096664   23621 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 10:46:43.096745   23621 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.992623ms
	I1007 10:46:43.096840   23621 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 10:46:43.096957   23621 kubeadm.go:310] [api-check] The API server is healthy after 6.063891261s
	I1007 10:46:43.097083   23621 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 10:46:43.097207   23621 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 10:46:43.097264   23621 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 10:46:43.097410   23621 kubeadm.go:310] [mark-control-plane] Marking the node ha-406505 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 10:46:43.097470   23621 kubeadm.go:310] [bootstrap-token] Using token: wypuxz.8mosh3hhf4vr1jtg
	I1007 10:46:43.098950   23621 out.go:235]   - Configuring RBAC rules ...
	I1007 10:46:43.099071   23621 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 10:46:43.099163   23621 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 10:46:43.099343   23621 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 10:46:43.099509   23621 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 10:46:43.099662   23621 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 10:46:43.099752   23621 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 10:46:43.099910   23621 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 10:46:43.099999   23621 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 10:46:43.100092   23621 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 10:46:43.100101   23621 kubeadm.go:310] 
	I1007 10:46:43.100184   23621 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 10:46:43.100194   23621 kubeadm.go:310] 
	I1007 10:46:43.100298   23621 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 10:46:43.100307   23621 kubeadm.go:310] 
	I1007 10:46:43.100344   23621 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 10:46:43.100433   23621 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 10:46:43.100524   23621 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 10:46:43.100533   23621 kubeadm.go:310] 
	I1007 10:46:43.100614   23621 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 10:46:43.100626   23621 kubeadm.go:310] 
	I1007 10:46:43.100698   23621 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 10:46:43.100713   23621 kubeadm.go:310] 
	I1007 10:46:43.100756   23621 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 10:46:43.100822   23621 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 10:46:43.100914   23621 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 10:46:43.100930   23621 kubeadm.go:310] 
	I1007 10:46:43.101035   23621 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 10:46:43.101136   23621 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 10:46:43.101145   23621 kubeadm.go:310] 
	I1007 10:46:43.101255   23621 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wypuxz.8mosh3hhf4vr1jtg \
	I1007 10:46:43.101367   23621 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df \
	I1007 10:46:43.101400   23621 kubeadm.go:310] 	--control-plane 
	I1007 10:46:43.101407   23621 kubeadm.go:310] 
	I1007 10:46:43.101475   23621 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 10:46:43.101485   23621 kubeadm.go:310] 
	I1007 10:46:43.101546   23621 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wypuxz.8mosh3hhf4vr1jtg \
	I1007 10:46:43.101655   23621 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df 
	I1007 10:46:43.101680   23621 cni.go:84] Creating CNI manager for ""
	I1007 10:46:43.101688   23621 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 10:46:43.103490   23621 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1007 10:46:43.104857   23621 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1007 10:46:43.110599   23621 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1007 10:46:43.110619   23621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1007 10:46:43.132034   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
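The manifest applied here is the kindnet CNI recommended above for this multi-node profile; once its pods are running the node can go Ready. A quick check with the same kubectl and kubeconfig (a sketch, not test output):
	sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes -o wide
	sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get daemonsets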
	I1007 10:46:43.562211   23621 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 10:46:43.562270   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:43.562324   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-406505 minikube.k8s.io/updated_at=2024_10_07T10_46_43_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f minikube.k8s.io/name=ha-406505 minikube.k8s.io/primary=true
	I1007 10:46:43.616727   23621 ops.go:34] apiserver oom_adj: -16
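The -16 read back here is the legacy oom_adj view of the strongly negative oom_score_adj the kubelet assigns to node-critical static pods, so the API server is among the last processes the OOM killer would pick. The same check by hand:
	pid=$(pgrep kube-apiserver)
	cat /proc/$pid/oom_adj        # -16 in this run (legacy view)
	cat /proc/$pid/oom_score_adj  # the value actually managed for the container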
	I1007 10:46:43.782316   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:44.282755   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:44.782532   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:45.283204   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:45.783063   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:46.283266   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:46.783411   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:46.943992   23621 kubeadm.go:1113] duration metric: took 3.381769921s to wait for elevateKubeSystemPrivileges
	I1007 10:46:46.944035   23621 kubeadm.go:394] duration metric: took 14.681663569s to StartCluster
	I1007 10:46:46.944056   23621 settings.go:142] acquiring lock: {Name:mk699f217216dbe513edf6a42c79fe85f8c20124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:46.944147   23621 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:46:46.945102   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/kubeconfig: {Name:mkc8a5ce1dbafe55e056433fff5c065506f83346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:46.945388   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 10:46:46.945386   23621 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:46:46.945413   23621 start.go:241] waiting for startup goroutines ...
	I1007 10:46:46.945429   23621 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 10:46:46.945523   23621 addons.go:69] Setting storage-provisioner=true in profile "ha-406505"
	I1007 10:46:46.945543   23621 addons.go:234] Setting addon storage-provisioner=true in "ha-406505"
	I1007 10:46:46.945553   23621 addons.go:69] Setting default-storageclass=true in profile "ha-406505"
	I1007 10:46:46.945572   23621 host.go:66] Checking if "ha-406505" exists ...
	I1007 10:46:46.945583   23621 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-406505"
	I1007 10:46:46.945607   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:46:46.946008   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:46.946009   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:46.946088   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:46.946051   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:46.961784   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45751
	I1007 10:46:46.961861   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42021
	I1007 10:46:46.962343   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:46.962400   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:46.962845   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:46.962858   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:46.962977   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:46.962998   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:46.963231   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:46.963434   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:46.963629   23621 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:46:46.963828   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:46.963879   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:46.966424   23621 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:46:46.966748   23621 kapi.go:59] client config for ha-406505: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.crt", KeyFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key", CAFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 10:46:46.967326   23621 cert_rotation.go:140] Starting client certificate rotation controller
	I1007 10:46:46.967544   23621 addons.go:234] Setting addon default-storageclass=true in "ha-406505"
	I1007 10:46:46.967595   23621 host.go:66] Checking if "ha-406505" exists ...
	I1007 10:46:46.967974   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:46.968044   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:46.980041   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40697
	I1007 10:46:46.980679   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:46.981275   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:46.981307   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:46.981679   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:46.981861   23621 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:46:46.982917   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43249
	I1007 10:46:46.983418   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:46.983677   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:46.983888   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:46.983902   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:46.984223   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:46.984726   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:46.984780   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:46.985635   23621 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 10:46:46.986794   23621 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 10:46:46.986811   23621 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 10:46:46.986827   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:46.990137   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:46.990593   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:46.990630   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:46.990792   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:46.990980   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:46.991153   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:46.991295   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:46:47.000938   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34485
	I1007 10:46:47.001317   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:47.001822   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:47.001835   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:47.002157   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:47.002359   23621 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:46:47.004192   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:47.004381   23621 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 10:46:47.004396   23621 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 10:46:47.004415   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:47.007286   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:47.007709   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:47.007733   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:47.007859   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:47.008018   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:47.008149   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:47.008248   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:46:47.195335   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 10:46:47.217916   23621 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 10:46:47.332630   23621 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 10:46:47.810865   23621 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
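
	[annotation] The ConfigMap edit at 10:46:47.195335 above pipes the coredns ConfigMap through sed to insert a "hosts" stanza ahead of the "forward . /etc/resolv.conf" line, so that host.minikube.internal resolves to the gateway IP, then replaces the ConfigMap. A minimal Go sketch of the stanza being injected (illustrative only, not minikube source; the IP and hostname are the values observed in the log):

	    package main

	    import "fmt"

	    // corednsHostsStanza builds the block that the sed command in the log inserts
	    // before the "forward . /etc/resolv.conf" line of the CoreDNS Corefile.
	    func corednsHostsStanza(gatewayIP, hostname string) string {
	        return fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }", gatewayIP, hostname)
	    }

	    func main() {
	        // Values observed in the log above.
	        fmt.Println(corednsHostsStanza("192.168.39.1", "host.minikube.internal"))
	    }
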
	I1007 10:46:48.064696   23621 main.go:141] libmachine: Making call to close driver server
	I1007 10:46:48.064705   23621 main.go:141] libmachine: Making call to close driver server
	I1007 10:46:48.064720   23621 main.go:141] libmachine: (ha-406505) Calling .Close
	I1007 10:46:48.064727   23621 main.go:141] libmachine: (ha-406505) Calling .Close
	I1007 10:46:48.064985   23621 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:46:48.065031   23621 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:46:48.065048   23621 main.go:141] libmachine: Making call to close driver server
	I1007 10:46:48.065053   23621 main.go:141] libmachine: (ha-406505) DBG | Closing plugin on server side
	I1007 10:46:48.065058   23621 main.go:141] libmachine: (ha-406505) Calling .Close
	I1007 10:46:48.064988   23621 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:46:48.065100   23621 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:46:48.065116   23621 main.go:141] libmachine: Making call to close driver server
	I1007 10:46:48.065125   23621 main.go:141] libmachine: (ha-406505) Calling .Close
	I1007 10:46:48.065104   23621 main.go:141] libmachine: (ha-406505) DBG | Closing plugin on server side
	I1007 10:46:48.065227   23621 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:46:48.065239   23621 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:46:48.066429   23621 main.go:141] libmachine: (ha-406505) DBG | Closing plugin on server side
	I1007 10:46:48.066481   23621 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:46:48.066520   23621 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:46:48.066607   23621 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 10:46:48.066629   23621 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 10:46:48.066712   23621 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1007 10:46:48.066721   23621 round_trippers.go:469] Request Headers:
	I1007 10:46:48.066729   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:46:48.066749   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:46:48.079736   23621 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1007 10:46:48.080394   23621 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1007 10:46:48.080409   23621 round_trippers.go:469] Request Headers:
	I1007 10:46:48.080417   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:46:48.080421   23621 round_trippers.go:473]     Content-Type: application/json
	I1007 10:46:48.080424   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:46:48.082744   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:46:48.082873   23621 main.go:141] libmachine: Making call to close driver server
	I1007 10:46:48.082885   23621 main.go:141] libmachine: (ha-406505) Calling .Close
	I1007 10:46:48.083166   23621 main.go:141] libmachine: (ha-406505) DBG | Closing plugin on server side
	I1007 10:46:48.083174   23621 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:46:48.083188   23621 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:46:48.084834   23621 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1007 10:46:48.085997   23621 addons.go:510] duration metric: took 1.140572645s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1007 10:46:48.086031   23621 start.go:246] waiting for cluster config update ...
	I1007 10:46:48.086044   23621 start.go:255] writing updated cluster config ...
	I1007 10:46:48.087964   23621 out.go:201] 
	I1007 10:46:48.089528   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:46:48.089609   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:46:48.091151   23621 out.go:177] * Starting "ha-406505-m02" control-plane node in "ha-406505" cluster
	I1007 10:46:48.092447   23621 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:46:48.092473   23621 cache.go:56] Caching tarball of preloaded images
	I1007 10:46:48.092563   23621 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 10:46:48.092574   23621 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 10:46:48.092637   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:46:48.092794   23621 start.go:360] acquireMachinesLock for ha-406505-m02: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 10:46:48.092831   23621 start.go:364] duration metric: took 21.347µs to acquireMachinesLock for "ha-406505-m02"
	I1007 10:46:48.092855   23621 start.go:93] Provisioning new machine with config: &{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Ce
rtExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:46:48.092915   23621 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1007 10:46:48.094418   23621 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 10:46:48.094505   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:48.094537   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:48.110315   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34135
	I1007 10:46:48.110866   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:48.111379   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:48.111403   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:48.111770   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:48.111953   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetMachineName
	I1007 10:46:48.112082   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:46:48.112219   23621 start.go:159] libmachine.API.Create for "ha-406505" (driver="kvm2")
	I1007 10:46:48.112248   23621 client.go:168] LocalClient.Create starting
	I1007 10:46:48.112287   23621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem
	I1007 10:46:48.112335   23621 main.go:141] libmachine: Decoding PEM data...
	I1007 10:46:48.112356   23621 main.go:141] libmachine: Parsing certificate...
	I1007 10:46:48.112422   23621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem
	I1007 10:46:48.112452   23621 main.go:141] libmachine: Decoding PEM data...
	I1007 10:46:48.112468   23621 main.go:141] libmachine: Parsing certificate...
	I1007 10:46:48.112494   23621 main.go:141] libmachine: Running pre-create checks...
	I1007 10:46:48.112506   23621 main.go:141] libmachine: (ha-406505-m02) Calling .PreCreateCheck
	I1007 10:46:48.112657   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetConfigRaw
	I1007 10:46:48.113018   23621 main.go:141] libmachine: Creating machine...
	I1007 10:46:48.113031   23621 main.go:141] libmachine: (ha-406505-m02) Calling .Create
	I1007 10:46:48.113183   23621 main.go:141] libmachine: (ha-406505-m02) Creating KVM machine...
	I1007 10:46:48.114398   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found existing default KVM network
	I1007 10:46:48.114519   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found existing private KVM network mk-ha-406505
	I1007 10:46:48.114657   23621 main.go:141] libmachine: (ha-406505-m02) Setting up store path in /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02 ...
	I1007 10:46:48.114682   23621 main.go:141] libmachine: (ha-406505-m02) Building disk image from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 10:46:48.114793   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:48.114651   23988 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:46:48.114857   23621 main.go:141] libmachine: (ha-406505-m02) Downloading /home/jenkins/minikube-integration/19761-3912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 10:46:48.352057   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:48.351887   23988 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa...
	I1007 10:46:48.484305   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:48.484165   23988 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/ha-406505-m02.rawdisk...
	I1007 10:46:48.484357   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Writing magic tar header
	I1007 10:46:48.484379   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Writing SSH key tar header
	I1007 10:46:48.484391   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:48.484280   23988 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02 ...
	I1007 10:46:48.484403   23621 main.go:141] libmachine: (ha-406505-m02) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02 (perms=drwx------)
	I1007 10:46:48.484420   23621 main.go:141] libmachine: (ha-406505-m02) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines (perms=drwxr-xr-x)
	I1007 10:46:48.484433   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02
	I1007 10:46:48.484444   23621 main.go:141] libmachine: (ha-406505-m02) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube (perms=drwxr-xr-x)
	I1007 10:46:48.484459   23621 main.go:141] libmachine: (ha-406505-m02) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912 (perms=drwxrwxr-x)
	I1007 10:46:48.484478   23621 main.go:141] libmachine: (ha-406505-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 10:46:48.484491   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines
	I1007 10:46:48.484510   23621 main.go:141] libmachine: (ha-406505-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 10:46:48.484523   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:46:48.484535   23621 main.go:141] libmachine: (ha-406505-m02) Creating domain...
	I1007 10:46:48.484554   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912
	I1007 10:46:48.484571   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 10:46:48.484583   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home/jenkins
	I1007 10:46:48.484602   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home
	I1007 10:46:48.484618   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Skipping /home - not owner
	I1007 10:46:48.485445   23621 main.go:141] libmachine: (ha-406505-m02) define libvirt domain using xml: 
	I1007 10:46:48.485473   23621 main.go:141] libmachine: (ha-406505-m02) <domain type='kvm'>
	I1007 10:46:48.485489   23621 main.go:141] libmachine: (ha-406505-m02)   <name>ha-406505-m02</name>
	I1007 10:46:48.485497   23621 main.go:141] libmachine: (ha-406505-m02)   <memory unit='MiB'>2200</memory>
	I1007 10:46:48.485528   23621 main.go:141] libmachine: (ha-406505-m02)   <vcpu>2</vcpu>
	I1007 10:46:48.485552   23621 main.go:141] libmachine: (ha-406505-m02)   <features>
	I1007 10:46:48.485563   23621 main.go:141] libmachine: (ha-406505-m02)     <acpi/>
	I1007 10:46:48.485574   23621 main.go:141] libmachine: (ha-406505-m02)     <apic/>
	I1007 10:46:48.485584   23621 main.go:141] libmachine: (ha-406505-m02)     <pae/>
	I1007 10:46:48.485596   23621 main.go:141] libmachine: (ha-406505-m02)     
	I1007 10:46:48.485608   23621 main.go:141] libmachine: (ha-406505-m02)   </features>
	I1007 10:46:48.485625   23621 main.go:141] libmachine: (ha-406505-m02)   <cpu mode='host-passthrough'>
	I1007 10:46:48.485637   23621 main.go:141] libmachine: (ha-406505-m02)   
	I1007 10:46:48.485645   23621 main.go:141] libmachine: (ha-406505-m02)   </cpu>
	I1007 10:46:48.485659   23621 main.go:141] libmachine: (ha-406505-m02)   <os>
	I1007 10:46:48.485671   23621 main.go:141] libmachine: (ha-406505-m02)     <type>hvm</type>
	I1007 10:46:48.485684   23621 main.go:141] libmachine: (ha-406505-m02)     <boot dev='cdrom'/>
	I1007 10:46:48.485699   23621 main.go:141] libmachine: (ha-406505-m02)     <boot dev='hd'/>
	I1007 10:46:48.485712   23621 main.go:141] libmachine: (ha-406505-m02)     <bootmenu enable='no'/>
	I1007 10:46:48.485721   23621 main.go:141] libmachine: (ha-406505-m02)   </os>
	I1007 10:46:48.485801   23621 main.go:141] libmachine: (ha-406505-m02)   <devices>
	I1007 10:46:48.485824   23621 main.go:141] libmachine: (ha-406505-m02)     <disk type='file' device='cdrom'>
	I1007 10:46:48.485840   23621 main.go:141] libmachine: (ha-406505-m02)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/boot2docker.iso'/>
	I1007 10:46:48.485854   23621 main.go:141] libmachine: (ha-406505-m02)       <target dev='hdc' bus='scsi'/>
	I1007 10:46:48.485865   23621 main.go:141] libmachine: (ha-406505-m02)       <readonly/>
	I1007 10:46:48.485875   23621 main.go:141] libmachine: (ha-406505-m02)     </disk>
	I1007 10:46:48.485902   23621 main.go:141] libmachine: (ha-406505-m02)     <disk type='file' device='disk'>
	I1007 10:46:48.485924   23621 main.go:141] libmachine: (ha-406505-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 10:46:48.485938   23621 main.go:141] libmachine: (ha-406505-m02)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/ha-406505-m02.rawdisk'/>
	I1007 10:46:48.485950   23621 main.go:141] libmachine: (ha-406505-m02)       <target dev='hda' bus='virtio'/>
	I1007 10:46:48.485972   23621 main.go:141] libmachine: (ha-406505-m02)     </disk>
	I1007 10:46:48.485982   23621 main.go:141] libmachine: (ha-406505-m02)     <interface type='network'>
	I1007 10:46:48.485991   23621 main.go:141] libmachine: (ha-406505-m02)       <source network='mk-ha-406505'/>
	I1007 10:46:48.485999   23621 main.go:141] libmachine: (ha-406505-m02)       <model type='virtio'/>
	I1007 10:46:48.486005   23621 main.go:141] libmachine: (ha-406505-m02)     </interface>
	I1007 10:46:48.486013   23621 main.go:141] libmachine: (ha-406505-m02)     <interface type='network'>
	I1007 10:46:48.486025   23621 main.go:141] libmachine: (ha-406505-m02)       <source network='default'/>
	I1007 10:46:48.486034   23621 main.go:141] libmachine: (ha-406505-m02)       <model type='virtio'/>
	I1007 10:46:48.486044   23621 main.go:141] libmachine: (ha-406505-m02)     </interface>
	I1007 10:46:48.486053   23621 main.go:141] libmachine: (ha-406505-m02)     <serial type='pty'>
	I1007 10:46:48.486063   23621 main.go:141] libmachine: (ha-406505-m02)       <target port='0'/>
	I1007 10:46:48.486074   23621 main.go:141] libmachine: (ha-406505-m02)     </serial>
	I1007 10:46:48.486084   23621 main.go:141] libmachine: (ha-406505-m02)     <console type='pty'>
	I1007 10:46:48.486094   23621 main.go:141] libmachine: (ha-406505-m02)       <target type='serial' port='0'/>
	I1007 10:46:48.486098   23621 main.go:141] libmachine: (ha-406505-m02)     </console>
	I1007 10:46:48.486106   23621 main.go:141] libmachine: (ha-406505-m02)     <rng model='virtio'>
	I1007 10:46:48.486122   23621 main.go:141] libmachine: (ha-406505-m02)       <backend model='random'>/dev/random</backend>
	I1007 10:46:48.486134   23621 main.go:141] libmachine: (ha-406505-m02)     </rng>
	I1007 10:46:48.486147   23621 main.go:141] libmachine: (ha-406505-m02)     
	I1007 10:46:48.486157   23621 main.go:141] libmachine: (ha-406505-m02)     
	I1007 10:46:48.486167   23621 main.go:141] libmachine: (ha-406505-m02)   </devices>
	I1007 10:46:48.486184   23621 main.go:141] libmachine: (ha-406505-m02) </domain>
	I1007 10:46:48.486192   23621 main.go:141] libmachine: (ha-406505-m02) 
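
	[annotation] The block above is a libvirt domain XML definition logged line by line before the ha-406505-m02 VM is created. A minimal text/template sketch that renders a similar definition (illustrative only: struct and field names are assumptions, the device list is abbreviated, and this is not minikube's actual template; the paths are placeholders):

	    package main

	    import (
	        "os"
	        "text/template"
	    )

	    // domainConfig carries the handful of values substituted into the XML.
	    type domainConfig struct {
	        Name     string
	        MemoryMB int
	        CPUs     int
	        ISOPath  string
	        DiskPath string
	        Network  string
	    }

	    const domainTmpl = `<domain type='kvm'>
	      <name>{{.Name}}</name>
	      <memory unit='MiB'>{{.MemoryMB}}</memory>
	      <vcpu>{{.CPUs}}</vcpu>
	      <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
	      <devices>
	        <disk type='file' device='cdrom'><source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
	        <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
	        <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
	      </devices>
	    </domain>
	    `

	    func main() {
	        t := template.Must(template.New("domain").Parse(domainTmpl))
	        if err := t.Execute(os.Stdout, domainConfig{
	            Name:     "ha-406505-m02",
	            MemoryMB: 2200,
	            CPUs:     2,
	            ISOPath:  "/path/to/boot2docker.iso",
	            DiskPath: "/path/to/ha-406505-m02.rawdisk",
	            Network:  "mk-ha-406505",
	        }); err != nil {
	            panic(err)
	        }
	    }
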
	I1007 10:46:48.492959   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:11:dc:7d in network default
	I1007 10:46:48.493532   23621 main.go:141] libmachine: (ha-406505-m02) Ensuring networks are active...
	I1007 10:46:48.493555   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:48.494204   23621 main.go:141] libmachine: (ha-406505-m02) Ensuring network default is active
	I1007 10:46:48.494531   23621 main.go:141] libmachine: (ha-406505-m02) Ensuring network mk-ha-406505 is active
	I1007 10:46:48.494994   23621 main.go:141] libmachine: (ha-406505-m02) Getting domain xml...
	I1007 10:46:48.495697   23621 main.go:141] libmachine: (ha-406505-m02) Creating domain...
	I1007 10:46:49.708066   23621 main.go:141] libmachine: (ha-406505-m02) Waiting to get IP...
	I1007 10:46:49.709797   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:49.710242   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:49.710274   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:49.710223   23988 retry.go:31] will retry after 204.773065ms: waiting for machine to come up
	I1007 10:46:49.916620   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:49.917029   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:49.917049   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:49.916992   23988 retry.go:31] will retry after 235.714104ms: waiting for machine to come up
	I1007 10:46:50.154409   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:50.154821   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:50.154854   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:50.154800   23988 retry.go:31] will retry after 473.988416ms: waiting for machine to come up
	I1007 10:46:50.630146   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:50.630593   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:50.630617   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:50.630561   23988 retry.go:31] will retry after 436.51933ms: waiting for machine to come up
	I1007 10:46:51.068126   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:51.068602   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:51.068629   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:51.068593   23988 retry.go:31] will retry after 554.772898ms: waiting for machine to come up
	I1007 10:46:51.625423   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:51.625799   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:51.625821   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:51.625760   23988 retry.go:31] will retry after 790.073775ms: waiting for machine to come up
	I1007 10:46:52.417715   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:52.418041   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:52.418068   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:52.417996   23988 retry.go:31] will retry after 1.143940138s: waiting for machine to come up
	I1007 10:46:53.563665   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:53.564172   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:53.564191   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:53.564119   23988 retry.go:31] will retry after 1.216262675s: waiting for machine to come up
	I1007 10:46:54.782182   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:54.782642   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:54.782668   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:54.782571   23988 retry.go:31] will retry after 1.336251943s: waiting for machine to come up
	I1007 10:46:56.120924   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:56.121343   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:56.121364   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:56.121297   23988 retry.go:31] will retry after 2.26253824s: waiting for machine to come up
	I1007 10:46:58.385702   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:58.386103   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:58.386134   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:58.386057   23988 retry.go:31] will retry after 1.827723489s: waiting for machine to come up
	I1007 10:47:00.215316   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:00.215726   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:47:00.215747   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:47:00.215701   23988 retry.go:31] will retry after 2.599258612s: waiting for machine to come up
	I1007 10:47:02.818331   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:02.818781   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:47:02.818803   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:47:02.818737   23988 retry.go:31] will retry after 3.193038382s: waiting for machine to come up
	I1007 10:47:06.014368   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:06.014784   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:47:06.014809   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:47:06.014743   23988 retry.go:31] will retry after 3.576827994s: waiting for machine to come up
	I1007 10:47:09.593923   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:09.594365   23621 main.go:141] libmachine: (ha-406505-m02) Found IP for machine: 192.168.39.37
	I1007 10:47:09.594385   23621 main.go:141] libmachine: (ha-406505-m02) Reserving static IP address...
	I1007 10:47:09.594399   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has current primary IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:09.594746   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find host DHCP lease matching {name: "ha-406505-m02", mac: "52:54:00:c4:d0:65", ip: "192.168.39.37"} in network mk-ha-406505
	I1007 10:47:09.668479   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Getting to WaitForSSH function...
	I1007 10:47:09.668509   23621 main.go:141] libmachine: (ha-406505-m02) Reserved static IP address: 192.168.39.37
	I1007 10:47:09.668519   23621 main.go:141] libmachine: (ha-406505-m02) Waiting for SSH to be available...
	I1007 10:47:09.670956   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:09.671275   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505
	I1007 10:47:09.671303   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find defined IP address of network mk-ha-406505 interface with MAC address 52:54:00:c4:d0:65
	I1007 10:47:09.671456   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Using SSH client type: external
	I1007 10:47:09.671481   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa (-rw-------)
	I1007 10:47:09.671540   23621 main.go:141] libmachine: (ha-406505-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 10:47:09.671566   23621 main.go:141] libmachine: (ha-406505-m02) DBG | About to run SSH command:
	I1007 10:47:09.671585   23621 main.go:141] libmachine: (ha-406505-m02) DBG | exit 0
	I1007 10:47:09.675078   23621 main.go:141] libmachine: (ha-406505-m02) DBG | SSH cmd err, output: exit status 255: 
	I1007 10:47:09.675099   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1007 10:47:09.675105   23621 main.go:141] libmachine: (ha-406505-m02) DBG | command : exit 0
	I1007 10:47:09.675110   23621 main.go:141] libmachine: (ha-406505-m02) DBG | err     : exit status 255
	I1007 10:47:09.675118   23621 main.go:141] libmachine: (ha-406505-m02) DBG | output  : 
	I1007 10:47:12.677242   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Getting to WaitForSSH function...
	I1007 10:47:12.679802   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:12.680241   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:12.680268   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:12.680410   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Using SSH client type: external
	I1007 10:47:12.680433   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa (-rw-------)
	I1007 10:47:12.680466   23621 main.go:141] libmachine: (ha-406505-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.37 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 10:47:12.680481   23621 main.go:141] libmachine: (ha-406505-m02) DBG | About to run SSH command:
	I1007 10:47:12.680494   23621 main.go:141] libmachine: (ha-406505-m02) DBG | exit 0
	I1007 10:47:12.804189   23621 main.go:141] libmachine: (ha-406505-m02) DBG | SSH cmd err, output: <nil>: 
	I1007 10:47:12.804446   23621 main.go:141] libmachine: (ha-406505-m02) KVM machine creation complete!
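
	[annotation] The "Waiting to get IP" and WaitForSSH phases above poll the DHCP leases of network mk-ha-406505 and retry with a growing delay (from ~200ms up to a few seconds) until the machine's MAC address has a lease and SSH answers. A generic retry-with-backoff sketch in that spirit (illustrative only; the growth factor and the lookup function are assumptions, not minikube's retry.go):

	    package main

	    import (
	        "errors"
	        "fmt"
	        "time"
	    )

	    // waitForIP polls lookup until it returns an address, growing the delay
	    // between attempts roughly like the retry intervals seen in the log above.
	    func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
	        delay := 200 * time.Millisecond
	        deadline := time.Now().Add(maxWait)
	        for time.Now().Before(deadline) {
	            if ip, err := lookup(); err == nil {
	                return ip, nil
	            }
	            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
	            time.Sleep(delay)
	            delay = delay * 3 / 2 // grow the delay each round
	        }
	        return "", errors.New("timed out waiting for machine IP")
	    }

	    func main() {
	        attempts := 0
	        ip, err := waitForIP(func() (string, error) {
	            attempts++
	            if attempts < 4 {
	                return "", errors.New("no DHCP lease yet")
	            }
	            return "192.168.39.37", nil // IP eventually reported in the log
	        }, time.Minute)
	        fmt.Println(ip, err)
	    }
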
	I1007 10:47:12.804774   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetConfigRaw
	I1007 10:47:12.805439   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:12.805661   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:12.805843   23621 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 10:47:12.805857   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetState
	I1007 10:47:12.807411   23621 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 10:47:12.807423   23621 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 10:47:12.807428   23621 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 10:47:12.807434   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:12.809666   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:12.809974   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:12.810001   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:12.810264   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:12.810464   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:12.810653   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:12.810803   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:12.810961   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:47:12.811169   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I1007 10:47:12.811184   23621 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 10:47:12.919372   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:47:12.919420   23621 main.go:141] libmachine: Detecting the provisioner...
	I1007 10:47:12.919430   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:12.922565   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:12.922966   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:12.922996   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:12.923171   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:12.923359   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:12.923510   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:12.923635   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:12.923785   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:47:12.923977   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I1007 10:47:12.924003   23621 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 10:47:13.033371   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 10:47:13.033448   23621 main.go:141] libmachine: found compatible host: buildroot
	I1007 10:47:13.033459   23621 main.go:141] libmachine: Provisioning with buildroot...
	I1007 10:47:13.033472   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetMachineName
	I1007 10:47:13.033744   23621 buildroot.go:166] provisioning hostname "ha-406505-m02"
	I1007 10:47:13.033784   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetMachineName
	I1007 10:47:13.033956   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:13.036444   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.036782   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.036811   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.036919   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:13.037077   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.037212   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.037334   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:13.037500   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:47:13.037700   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I1007 10:47:13.037718   23621 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406505-m02 && echo "ha-406505-m02" | sudo tee /etc/hostname
	I1007 10:47:13.163957   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406505-m02
	
	I1007 10:47:13.164007   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:13.166790   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.167220   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.167245   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.167419   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:13.167615   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.167799   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.167934   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:13.168112   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:47:13.168270   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I1007 10:47:13.168286   23621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406505-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406505-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406505-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 10:47:13.289811   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:47:13.289837   23621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 10:47:13.289852   23621 buildroot.go:174] setting up certificates
	I1007 10:47:13.289860   23621 provision.go:84] configureAuth start
	I1007 10:47:13.289876   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetMachineName
	I1007 10:47:13.290178   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetIP
	I1007 10:47:13.292829   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.293122   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.293145   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.293256   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:13.296131   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.296632   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.296661   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.296855   23621 provision.go:143] copyHostCerts
	I1007 10:47:13.296886   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:47:13.296917   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 10:47:13.296926   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:47:13.296997   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 10:47:13.297093   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:47:13.297110   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 10:47:13.297114   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:47:13.297137   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 10:47:13.297178   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:47:13.297193   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 10:47:13.297199   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:47:13.297219   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 10:47:13.297264   23621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.ha-406505-m02 san=[127.0.0.1 192.168.39.37 ha-406505-m02 localhost minikube]
	I1007 10:47:13.470867   23621 provision.go:177] copyRemoteCerts
	I1007 10:47:13.470925   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 10:47:13.470948   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:13.473620   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.473865   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.473901   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.474152   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:13.474379   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.474538   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:13.474650   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa Username:docker}
	I1007 10:47:13.558906   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 10:47:13.558995   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 10:47:13.584265   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 10:47:13.584335   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 10:47:13.609098   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 10:47:13.609208   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 10:47:13.633989   23621 provision.go:87] duration metric: took 344.11512ms to configureAuth
	I1007 10:47:13.634025   23621 buildroot.go:189] setting minikube options for container-runtime
	I1007 10:47:13.634234   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:47:13.634302   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:13.636945   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.637279   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.637307   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.637491   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:13.637663   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.637855   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.638031   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:13.638190   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:47:13.638341   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I1007 10:47:13.638355   23621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 10:47:13.873602   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 10:47:13.873628   23621 main.go:141] libmachine: Checking connection to Docker...
	I1007 10:47:13.873636   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetURL
	I1007 10:47:13.874889   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Using libvirt version 6000000
	I1007 10:47:13.877460   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.877837   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.877860   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.878084   23621 main.go:141] libmachine: Docker is up and running!
	I1007 10:47:13.878101   23621 main.go:141] libmachine: Reticulating splines...
	I1007 10:47:13.878109   23621 client.go:171] duration metric: took 25.765852825s to LocalClient.Create
	I1007 10:47:13.878137   23621 start.go:167] duration metric: took 25.765919747s to libmachine.API.Create "ha-406505"
	I1007 10:47:13.878150   23621 start.go:293] postStartSetup for "ha-406505-m02" (driver="kvm2")
	I1007 10:47:13.878166   23621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 10:47:13.878189   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:13.878390   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 10:47:13.878411   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:13.880668   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.881014   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.881044   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.881180   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:13.881364   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.881519   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:13.881655   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa Username:docker}
	I1007 10:47:13.968514   23621 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 10:47:13.973091   23621 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 10:47:13.973116   23621 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 10:47:13.973185   23621 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 10:47:13.973262   23621 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 10:47:13.973272   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /etc/ssl/certs/110962.pem
	I1007 10:47:13.973349   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 10:47:13.984972   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:47:14.013706   23621 start.go:296] duration metric: took 135.541721ms for postStartSetup
	I1007 10:47:14.013768   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetConfigRaw
	I1007 10:47:14.014387   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetIP
	I1007 10:47:14.017290   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.017760   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:14.017791   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.018011   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:47:14.018210   23621 start.go:128] duration metric: took 25.92528673s to createHost
	I1007 10:47:14.018236   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:14.020800   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.021086   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:14.021115   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.021288   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:14.021489   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:14.021660   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:14.021768   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:14.021952   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:47:14.022115   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I1007 10:47:14.022125   23621 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 10:47:14.132989   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728298034.110680519
	
	I1007 10:47:14.133013   23621 fix.go:216] guest clock: 1728298034.110680519
	I1007 10:47:14.133022   23621 fix.go:229] Guest: 2024-10-07 10:47:14.110680519 +0000 UTC Remote: 2024-10-07 10:47:14.018221797 +0000 UTC m=+73.371361289 (delta=92.458722ms)
	I1007 10:47:14.133040   23621 fix.go:200] guest clock delta is within tolerance: 92.458722ms
	I1007 10:47:14.133051   23621 start.go:83] releasing machines lock for "ha-406505-m02", held for 26.040206453s
	I1007 10:47:14.133067   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:14.133299   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetIP
	I1007 10:47:14.135869   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.136305   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:14.136328   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.139140   23621 out.go:177] * Found network options:
	I1007 10:47:14.140689   23621 out.go:177]   - NO_PROXY=192.168.39.250
	W1007 10:47:14.142083   23621 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 10:47:14.142129   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:14.142678   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:14.142868   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:14.142974   23621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 10:47:14.143014   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	W1007 10:47:14.143107   23621 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 10:47:14.143184   23621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 10:47:14.143226   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:14.145983   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.146148   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.146289   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:14.146315   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.146499   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:14.146575   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:14.146609   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.146657   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:14.146758   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:14.146834   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:14.146877   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:14.146982   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa Username:docker}
	I1007 10:47:14.147039   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:14.147184   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa Username:docker}
	I1007 10:47:14.387899   23621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 10:47:14.394771   23621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 10:47:14.394848   23621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 10:47:14.410661   23621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 10:47:14.410689   23621 start.go:495] detecting cgroup driver to use...
	I1007 10:47:14.410772   23621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 10:47:14.427868   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 10:47:14.444153   23621 docker.go:217] disabling cri-docker service (if available) ...
	I1007 10:47:14.444206   23621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 10:47:14.460223   23621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 10:47:14.476365   23621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 10:47:14.606104   23621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 10:47:14.745910   23621 docker.go:233] disabling docker service ...
	I1007 10:47:14.745980   23621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 10:47:14.760987   23621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 10:47:14.774829   23621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 10:47:14.912287   23621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 10:47:15.035180   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 10:47:15.050257   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 10:47:15.070114   23621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 10:47:15.070181   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.081232   23621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 10:47:15.081328   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.097360   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.109085   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.120920   23621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 10:47:15.132712   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.143857   23621 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.162242   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.173052   23621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 10:47:15.183576   23621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 10:47:15.183636   23621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 10:47:15.198592   23621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 10:47:15.209269   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:47:15.343340   23621 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 10:47:15.435410   23621 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 10:47:15.435495   23621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 10:47:15.440650   23621 start.go:563] Will wait 60s for crictl version
	I1007 10:47:15.440716   23621 ssh_runner.go:195] Run: which crictl
	I1007 10:47:15.445010   23621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 10:47:15.485747   23621 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 10:47:15.485842   23621 ssh_runner.go:195] Run: crio --version
	I1007 10:47:15.514633   23621 ssh_runner.go:195] Run: crio --version
	I1007 10:47:15.544607   23621 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 10:47:15.546495   23621 out.go:177]   - env NO_PROXY=192.168.39.250
	I1007 10:47:15.547763   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetIP
	I1007 10:47:15.550503   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:15.550835   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:15.550856   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:15.551135   23621 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 10:47:15.555619   23621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:47:15.568228   23621 mustload.go:65] Loading cluster: ha-406505
	I1007 10:47:15.568429   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:47:15.568711   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:47:15.568757   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:47:15.583930   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32787
	I1007 10:47:15.584453   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:47:15.584977   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:47:15.584999   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:47:15.585308   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:47:15.585449   23621 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:47:15.586928   23621 host.go:66] Checking if "ha-406505" exists ...
	I1007 10:47:15.587242   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:47:15.587291   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:47:15.601672   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41215
	I1007 10:47:15.602061   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:47:15.602537   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:47:15.602556   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:47:15.602817   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:47:15.602964   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:47:15.603079   23621 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505 for IP: 192.168.39.37
	I1007 10:47:15.603088   23621 certs.go:194] generating shared ca certs ...
	I1007 10:47:15.603106   23621 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:47:15.603231   23621 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 10:47:15.603292   23621 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 10:47:15.603306   23621 certs.go:256] generating profile certs ...
	I1007 10:47:15.603393   23621 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key
	I1007 10:47:15.603425   23621 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.9c139e39
	I1007 10:47:15.603446   23621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.9c139e39 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.250 192.168.39.37 192.168.39.254]
	I1007 10:47:15.744161   23621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.9c139e39 ...
	I1007 10:47:15.744193   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.9c139e39: {Name:mkae386a40e79e3b04467f9f82e8cc7ab31669ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:47:15.744370   23621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.9c139e39 ...
	I1007 10:47:15.744387   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.9c139e39: {Name:mkd96b82bea042246d2ff8a9f6d26e46ce2f8d80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:47:15.744484   23621 certs.go:381] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.9c139e39 -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt
	I1007 10:47:15.744631   23621 certs.go:385] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.9c139e39 -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key
	I1007 10:47:15.744793   23621 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key
	I1007 10:47:15.744812   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 10:47:15.744830   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 10:47:15.744846   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 10:47:15.744865   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 10:47:15.744882   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 10:47:15.744900   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 10:47:15.744919   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 10:47:15.744937   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 10:47:15.745001   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 10:47:15.745040   23621 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 10:47:15.745053   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 10:47:15.745085   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 10:47:15.745117   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 10:47:15.745148   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 10:47:15.745217   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:47:15.745255   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:47:15.745278   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem -> /usr/share/ca-certificates/11096.pem
	I1007 10:47:15.745298   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /usr/share/ca-certificates/110962.pem
	I1007 10:47:15.745339   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:47:15.748712   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:47:15.749114   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:47:15.749137   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:47:15.749337   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:47:15.749533   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:47:15.749703   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:47:15.749841   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:47:15.828372   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 10:47:15.833129   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 10:47:15.845052   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 10:47:15.849337   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1007 10:47:15.859666   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 10:47:15.864073   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 10:47:15.882571   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 10:47:15.888480   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1007 10:47:15.901431   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 10:47:15.905968   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 10:47:15.922566   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 10:47:15.927045   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 10:47:15.940895   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 10:47:15.967974   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 10:47:15.993940   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 10:47:16.018147   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 10:47:16.043434   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1007 10:47:16.069121   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 10:47:16.093333   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 10:47:16.117209   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 10:47:16.141941   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 10:47:16.166358   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 10:47:16.191390   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 10:47:16.216168   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 10:47:16.233270   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1007 10:47:16.250510   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 10:47:16.267543   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1007 10:47:16.287073   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 10:47:16.306608   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 10:47:16.324070   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 10:47:16.341221   23621 ssh_runner.go:195] Run: openssl version
	I1007 10:47:16.347150   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 10:47:16.358131   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 10:47:16.362824   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 10:47:16.362874   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 10:47:16.368599   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 10:47:16.378927   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 10:47:16.389775   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:47:16.394445   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:47:16.394503   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:47:16.400151   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 10:47:16.410835   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 10:47:16.421451   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 10:47:16.425954   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 10:47:16.426044   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 10:47:16.432023   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
	I1007 10:47:16.443765   23621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 10:47:16.448499   23621 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 10:47:16.448550   23621 kubeadm.go:934] updating node {m02 192.168.39.37 8443 v1.31.1 crio true true} ...
	I1007 10:47:16.448621   23621 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406505-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.37
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 10:47:16.448641   23621 kube-vip.go:115] generating kube-vip config ...
	I1007 10:47:16.448674   23621 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 10:47:16.465324   23621 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 10:47:16.465389   23621 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1007 10:47:16.465443   23621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 10:47:16.476363   23621 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1007 10:47:16.476434   23621 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1007 10:47:16.487040   23621 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1007 10:47:16.487085   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 10:47:16.487142   23621 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1007 10:47:16.487150   23621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 10:47:16.487275   23621 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1007 10:47:16.491771   23621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1007 10:47:16.491798   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1007 10:47:17.509026   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 10:47:17.524363   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 10:47:17.524452   23621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 10:47:17.528672   23621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1007 10:47:17.528709   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1007 10:47:17.599765   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 10:47:17.599853   23621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 10:47:17.612766   23621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1007 10:47:17.612810   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1007 10:47:18.077437   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 10:47:18.088177   23621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1007 10:47:18.105381   23621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 10:47:18.122405   23621 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 10:47:18.142555   23621 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 10:47:18.146470   23621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:47:18.159594   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:47:18.291092   23621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:47:18.309170   23621 host.go:66] Checking if "ha-406505" exists ...
	I1007 10:47:18.309657   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:47:18.309712   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:47:18.324913   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37239
	I1007 10:47:18.325340   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:47:18.325803   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:47:18.325831   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:47:18.326166   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:47:18.326334   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:47:18.326443   23621 start.go:317] joinCluster: &{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:47:18.326602   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1007 10:47:18.326630   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:47:18.329583   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:47:18.329975   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:47:18.330001   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:47:18.330140   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:47:18.330306   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:47:18.330451   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:47:18.330595   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:47:18.480055   23621 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:47:18.480129   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hab5tp.p59kud3l77ixefj4 --discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-406505-m02 --control-plane --apiserver-advertise-address=192.168.39.37 --apiserver-bind-port=8443"
	I1007 10:47:40.053984   23621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hab5tp.p59kud3l77ixefj4 --discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-406505-m02 --control-plane --apiserver-advertise-address=192.168.39.37 --apiserver-bind-port=8443": (21.573829794s)
	I1007 10:47:40.054022   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1007 10:47:40.624911   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-406505-m02 minikube.k8s.io/updated_at=2024_10_07T10_47_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f minikube.k8s.io/name=ha-406505 minikube.k8s.io/primary=false
	I1007 10:47:40.773203   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-406505-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1007 10:47:40.895450   23621 start.go:319] duration metric: took 22.569002454s to joinCluster
	I1007 10:47:40.895532   23621 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:47:40.895833   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:47:40.897246   23621 out.go:177] * Verifying Kubernetes components...
	I1007 10:47:40.898575   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:47:41.187385   23621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:47:41.220775   23621 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:47:41.221110   23621 kapi.go:59] client config for ha-406505: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.crt", KeyFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key", CAFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 10:47:41.221195   23621 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.250:8443
	I1007 10:47:41.221469   23621 node_ready.go:35] waiting up to 6m0s for node "ha-406505-m02" to be "Ready" ...
	I1007 10:47:41.221568   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:41.221578   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:41.221589   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:41.221596   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:41.242142   23621 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1007 10:47:41.721789   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:41.721819   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:41.721830   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:41.721836   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:41.725638   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:42.222559   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:42.222582   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:42.222592   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:42.222597   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:42.226807   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:42.722633   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:42.722659   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:42.722670   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:42.722676   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:42.727142   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:43.222278   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:43.222306   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:43.222318   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:43.222325   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:43.225924   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:43.226434   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:43.722388   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:43.722413   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:43.722421   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:43.722426   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:43.726394   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:44.221754   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:44.221782   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:44.221791   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:44.221797   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:44.225377   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:44.722382   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:44.722405   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:44.722415   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:44.722421   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:44.726019   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:45.222002   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:45.222024   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:45.222035   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:45.222042   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:45.228065   23621 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 10:47:45.228617   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:45.722139   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:45.722161   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:45.722169   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:45.722174   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:45.726310   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:46.221951   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:46.221984   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:46.221995   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:46.222001   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:46.226108   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:46.722407   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:46.722427   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:46.722434   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:46.722439   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:46.726228   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:47.222433   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:47.222457   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:47.222466   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:47.222471   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:47.226517   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:47.722508   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:47.722532   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:47.722541   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:47.722546   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:47.725944   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:47.726592   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:48.222456   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:48.222477   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:48.222487   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:48.222492   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:48.568208   23621 round_trippers.go:574] Response Status: 200 OK in 345 milliseconds
	I1007 10:47:48.721707   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:48.721729   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:48.721737   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:48.721740   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:48.725191   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:49.222104   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:49.222129   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:49.222137   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:49.222142   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:49.226421   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:49.722572   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:49.722597   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:49.722606   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:49.722610   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:49.726213   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:49.726960   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:50.222350   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:50.222373   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:50.222381   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:50.222384   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:50.226118   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:50.722605   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:50.722631   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:50.722640   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:50.722645   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:50.726160   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:51.221666   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:51.221694   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:51.221714   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:51.221721   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:51.225253   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:51.722133   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:51.722158   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:51.722167   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:51.722171   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:51.725645   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:52.221757   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:52.221780   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:52.221787   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:52.221792   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:52.226043   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:52.226536   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:52.721878   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:52.721905   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:52.721913   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:52.721917   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:52.725379   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:53.221755   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:53.221777   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:53.221786   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:53.221789   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:53.225585   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:53.721883   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:53.721908   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:53.721920   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:53.721925   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:53.725474   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:54.221694   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:54.221720   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:54.221731   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:54.221737   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:54.225868   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:54.226748   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:54.722061   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:54.722086   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:54.722094   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:54.722099   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:54.725979   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:55.221978   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:55.222010   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:55.222019   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:55.222022   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:55.225724   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:55.721884   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:55.721911   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:55.721924   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:55.721931   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:55.726067   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:56.222572   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:56.222595   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:56.222603   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:56.222606   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:56.227082   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:56.227824   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:56.722293   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:56.722317   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:56.722325   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:56.722329   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:56.726068   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:57.222438   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:57.222461   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:57.222469   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:57.222478   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:57.226913   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:57.722050   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:57.722075   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:57.722083   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:57.722087   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:57.726100   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:58.222538   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:58.222560   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:58.222568   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:58.222572   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:58.227033   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:58.722681   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:58.722703   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:58.722711   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:58.722717   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:58.725986   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:58.726597   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:59.221983   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:59.222007   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:59.222015   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:59.222018   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:59.225585   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:59.722632   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:59.722658   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:59.722668   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:59.722672   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:59.726213   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:00.222316   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:00.222339   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.222347   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.222351   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.225920   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:00.722449   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:00.722475   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.722484   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.722488   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.725827   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:00.726434   23621 node_ready.go:49] node "ha-406505-m02" has status "Ready":"True"
	I1007 10:48:00.726454   23621 node_ready.go:38] duration metric: took 19.504967744s for node "ha-406505-m02" to be "Ready" ...
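node_ready.go polls GET /api/v1/nodes/<name> roughly every 500ms until the node's Ready condition reports True, which is what produced the block of requests above. A rough client-go equivalent of that loop, assuming an existing clientset; the helper name and interval are illustrative:

package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeReady polls the named node until its Ready condition is True or
// the context expires, mirroring the node_ready.go polling in the log above.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
		case <-ticker.C:
		}
	}
}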
	I1007 10:48:00.726462   23621 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 10:48:00.726536   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:48:00.726548   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.726555   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.726559   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.731138   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:00.737911   23621 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ghmwd" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.737985   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghmwd
	I1007 10:48:00.737993   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.738001   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.738005   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.741209   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:00.742237   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:00.742253   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.742260   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.742265   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.745097   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:48:00.745537   23621 pod_ready.go:93] pod "coredns-7c65d6cfc9-ghmwd" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:00.745556   23621 pod_ready.go:82] duration metric: took 7.621102ms for pod "coredns-7c65d6cfc9-ghmwd" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.745565   23621 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xzc88" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.745629   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xzc88
	I1007 10:48:00.745638   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.745645   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.745650   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.748174   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:48:00.748906   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:00.748922   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.748930   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.748936   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.751224   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:48:00.751710   23621 pod_ready.go:93] pod "coredns-7c65d6cfc9-xzc88" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:00.751731   23621 pod_ready.go:82] duration metric: took 6.159383ms for pod "coredns-7c65d6cfc9-xzc88" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.751740   23621 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.751799   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-406505
	I1007 10:48:00.751809   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.751816   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.751820   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.755074   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:00.755602   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:00.755617   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.755625   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.755629   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.758258   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:48:00.758840   23621 pod_ready.go:93] pod "etcd-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:00.758864   23621 pod_ready.go:82] duration metric: took 7.117967ms for pod "etcd-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.758875   23621 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.758941   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-406505-m02
	I1007 10:48:00.758951   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.758962   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.758969   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.761946   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:48:00.762531   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:00.762545   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.762555   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.762563   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.765249   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:48:00.765990   23621 pod_ready.go:93] pod "etcd-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:00.766010   23621 pod_ready.go:82] duration metric: took 7.127993ms for pod "etcd-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.766024   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.923419   23621 request.go:632] Waited for 157.329652ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505
	I1007 10:48:00.923504   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505
	I1007 10:48:00.923514   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.923521   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.923526   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.926903   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:01.122872   23621 request.go:632] Waited for 195.370343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:01.122996   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:01.123006   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:01.123014   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:01.123018   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:01.126358   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:01.127128   23621 pod_ready.go:93] pod "kube-apiserver-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:01.127149   23621 pod_ready.go:82] duration metric: took 361.118588ms for pod "kube-apiserver-ha-406505" in "kube-system" namespace to be "Ready" ...
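The request.go:632 lines are client-go's client-side rate limiter queueing requests locally (by default roughly 5 QPS with a burst of 10), not API-server priority and fairness. If that queueing matters, the limits can be raised on the rest.Config before the clientset is built; a sketch under that assumption, with illustrative numbers:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/util/flowcontrol"
)

// newFastClient raises the client-side rate limits so bursts of readiness GETs
// are not delayed locally. The values are illustrative, not what minikube uses.
func newFastClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
	cfg.QPS = 50
	cfg.Burst = 100
	// Equivalent explicit form using a token-bucket limiter.
	cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(cfg.QPS, cfg.Burst)
	return kubernetes.NewForConfig(cfg)
}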
	I1007 10:48:01.127159   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:01.322514   23621 request.go:632] Waited for 195.261429ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505-m02
	I1007 10:48:01.322571   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505-m02
	I1007 10:48:01.322577   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:01.322584   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:01.322589   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:01.326760   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:01.523038   23621 request.go:632] Waited for 195.412644ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:01.523093   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:01.523098   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:01.523105   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:01.523109   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:01.527065   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:01.527580   23621 pod_ready.go:93] pod "kube-apiserver-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:01.527599   23621 pod_ready.go:82] duration metric: took 400.432673ms for pod "kube-apiserver-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:01.527611   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:01.722806   23621 request.go:632] Waited for 195.048611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505
	I1007 10:48:01.722880   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505
	I1007 10:48:01.722888   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:01.722898   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:01.722904   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:01.727096   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:01.923348   23621 request.go:632] Waited for 195.373775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:01.923440   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:01.923452   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:01.923463   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:01.923469   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:01.927522   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:01.927961   23621 pod_ready.go:93] pod "kube-controller-manager-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:01.927977   23621 pod_ready.go:82] duration metric: took 400.359633ms for pod "kube-controller-manager-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:01.928001   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:02.123092   23621 request.go:632] Waited for 195.004556ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505-m02
	I1007 10:48:02.123150   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505-m02
	I1007 10:48:02.123157   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:02.123164   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:02.123167   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:02.127404   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:02.323429   23621 request.go:632] Waited for 195.351342ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:02.323503   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:02.323511   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:02.323522   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:02.323532   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:02.326657   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:02.327382   23621 pod_ready.go:93] pod "kube-controller-manager-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:02.327399   23621 pod_ready.go:82] duration metric: took 399.387331ms for pod "kube-controller-manager-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:02.327409   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6ng4z" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:02.522522   23621 request.go:632] Waited for 195.05566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6ng4z
	I1007 10:48:02.522601   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6ng4z
	I1007 10:48:02.522607   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:02.522615   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:02.522620   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:02.526624   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:02.722785   23621 request.go:632] Waited for 195.392665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:02.722866   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:02.722874   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:02.722885   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:02.722889   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:02.726617   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:02.727143   23621 pod_ready.go:93] pod "kube-proxy-6ng4z" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:02.727160   23621 pod_ready.go:82] duration metric: took 399.745226ms for pod "kube-proxy-6ng4z" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:02.727169   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nlnhf" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:02.923398   23621 request.go:632] Waited for 196.154565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nlnhf
	I1007 10:48:02.923464   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nlnhf
	I1007 10:48:02.923473   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:02.923484   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:02.923492   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:02.926698   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:03.122834   23621 request.go:632] Waited for 195.347405ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:03.122890   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:03.122897   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:03.122905   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:03.122909   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:03.126570   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:03.127726   23621 pod_ready.go:93] pod "kube-proxy-nlnhf" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:03.127745   23621 pod_ready.go:82] duration metric: took 400.569818ms for pod "kube-proxy-nlnhf" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:03.127759   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:03.322923   23621 request.go:632] Waited for 195.092944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505
	I1007 10:48:03.322991   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505
	I1007 10:48:03.322997   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:03.323004   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:03.323009   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:03.326336   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:03.523252   23621 request.go:632] Waited for 196.355286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:03.523323   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:03.523328   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:03.523336   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:03.523344   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:03.526876   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:03.527478   23621 pod_ready.go:93] pod "kube-scheduler-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:03.527506   23621 pod_ready.go:82] duration metric: took 399.737789ms for pod "kube-scheduler-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:03.527518   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:03.722433   23621 request.go:632] Waited for 194.843724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505-m02
	I1007 10:48:03.722510   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505-m02
	I1007 10:48:03.722516   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:03.722524   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:03.722534   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:03.726261   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:03.923306   23621 request.go:632] Waited for 196.357784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:03.923362   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:03.923368   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:03.923375   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:03.923379   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:03.927011   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:03.927578   23621 pod_ready.go:93] pod "kube-scheduler-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:03.927594   23621 pod_ready.go:82] duration metric: took 400.068935ms for pod "kube-scheduler-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:03.927605   23621 pod_ready.go:39] duration metric: took 3.201132108s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
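Each pod_ready.go step above fetches a system pod, checks its Ready condition, and then fetches the node it runs on. A condensed sketch of the per-pod check, assuming an existing clientset; the helper name is an assumption:

package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady reports whether the named pod has condition Ready=True,
// the same check pod_ready.go:93 logs for each system-critical pod above.
func isPodReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}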
	I1007 10:48:03.927618   23621 api_server.go:52] waiting for apiserver process to appear ...
	I1007 10:48:03.927663   23621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 10:48:03.942605   23621 api_server.go:72] duration metric: took 23.047005374s to wait for apiserver process to appear ...
	I1007 10:48:03.942635   23621 api_server.go:88] waiting for apiserver healthz status ...
	I1007 10:48:03.942653   23621 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I1007 10:48:03.947020   23621 api_server.go:279] https://192.168.39.250:8443/healthz returned 200:
	ok
	I1007 10:48:03.947103   23621 round_trippers.go:463] GET https://192.168.39.250:8443/version
	I1007 10:48:03.947113   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:03.947126   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:03.947134   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:03.948044   23621 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1007 10:48:03.948143   23621 api_server.go:141] control plane version: v1.31.1
	I1007 10:48:03.948169   23621 api_server.go:131] duration metric: took 5.525857ms to wait for apiserver health ...
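The healthz probe above is a plain GET against /healthz on the API server that expects the literal body "ok". With an existing clientset the same check can be routed through the discovery REST client; a sketch, assuming client-go:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// checkAPIServerHealthz issues GET /healthz through the clientset's REST client
// and reports whether the API server answered "ok", as api_server.go does above.
func checkAPIServerHealthz(ctx context.Context, cs kubernetes.Interface) error {
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return err
	}
	if string(body) != "ok" {
		return fmt.Errorf("unexpected healthz response: %q", string(body))
	}
	return nil
}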
	I1007 10:48:03.948178   23621 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 10:48:04.122494   23621 request.go:632] Waited for 174.227541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:48:04.122548   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:48:04.122554   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:04.122561   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:04.122565   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:04.127425   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:04.131821   23621 system_pods.go:59] 17 kube-system pods found
	I1007 10:48:04.131853   23621 system_pods.go:61] "coredns-7c65d6cfc9-ghmwd" [8d8533b9-192b-49a8-8d17-96ffd98cb729] Running
	I1007 10:48:04.131860   23621 system_pods.go:61] "coredns-7c65d6cfc9-xzc88" [f22736c0-5ca4-4c9b-bcd4-cf95f9390507] Running
	I1007 10:48:04.131867   23621 system_pods.go:61] "etcd-ha-406505" [06acd5be-60d4-4e5d-878a-eb237eccef90] Running
	I1007 10:48:04.131873   23621 system_pods.go:61] "etcd-ha-406505-m02" [6bac8986-60bc-4067-8d14-39e7a7d89de4] Running
	I1007 10:48:04.131878   23621 system_pods.go:61] "kindnet-h8fh4" [4963cef8-d0f0-47a7-a9f3-4ec6cc1cbdd2] Running
	I1007 10:48:04.131884   23621 system_pods.go:61] "kindnet-pt74h" [bb72605c-a772-4b04-a14d-02efe957c9d0] Running
	I1007 10:48:04.131889   23621 system_pods.go:61] "kube-apiserver-ha-406505" [86ec3125-9faf-431c-829c-74bedca10848] Running
	I1007 10:48:04.131893   23621 system_pods.go:61] "kube-apiserver-ha-406505-m02" [9b1f1980-971a-4059-90f3-75aa418811e9] Running
	I1007 10:48:04.131898   23621 system_pods.go:61] "kube-controller-manager-ha-406505" [9f228931-e5fe-4983-aed5-71ec54a10242] Running
	I1007 10:48:04.131903   23621 system_pods.go:61] "kube-controller-manager-ha-406505-m02" [87c1a5f3-2c61-40b6-8ac6-8641fe480883] Running
	I1007 10:48:04.131908   23621 system_pods.go:61] "kube-proxy-6ng4z" [0bbf71c3-f4c6-44e2-a86f-2528957fd17e] Running
	I1007 10:48:04.131914   23621 system_pods.go:61] "kube-proxy-nlnhf" [053080d5-38da-4108-96aa-f4a8dbe5de91] Running
	I1007 10:48:04.131919   23621 system_pods.go:61] "kube-scheduler-ha-406505" [40a9a2f6-4f4e-48eb-8ef2-958b87cea171] Running
	I1007 10:48:04.131925   23621 system_pods.go:61] "kube-scheduler-ha-406505-m02" [b1def4a5-2143-46b6-ae4a-44b7997ec7b2] Running
	I1007 10:48:04.131932   23621 system_pods.go:61] "kube-vip-ha-406505" [e31ca80b-44ca-4b0a-8d38-ba06f81592f8] Running
	I1007 10:48:04.131939   23621 system_pods.go:61] "kube-vip-ha-406505-m02" [982dab2a-18d2-409a-9d27-51f746597898] Running
	I1007 10:48:04.131945   23621 system_pods.go:61] "storage-provisioner" [be10b32c-e562-40ef-8b47-04cd1caf9778] Running
	I1007 10:48:04.131956   23621 system_pods.go:74] duration metric: took 183.770827ms to wait for pod list to return data ...
	I1007 10:48:04.131966   23621 default_sa.go:34] waiting for default service account to be created ...
	I1007 10:48:04.323406   23621 request.go:632] Waited for 191.335119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/default/serviceaccounts
	I1007 10:48:04.323466   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/default/serviceaccounts
	I1007 10:48:04.323474   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:04.323485   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:04.323491   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:04.326946   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:04.327172   23621 default_sa.go:45] found service account: "default"
	I1007 10:48:04.327188   23621 default_sa.go:55] duration metric: took 195.21627ms for default service account to be created ...
	I1007 10:48:04.327195   23621 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 10:48:04.522586   23621 request.go:632] Waited for 195.315471ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:48:04.522647   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:48:04.522653   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:04.522661   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:04.522664   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:04.527722   23621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 10:48:04.532291   23621 system_pods.go:86] 17 kube-system pods found
	I1007 10:48:04.532319   23621 system_pods.go:89] "coredns-7c65d6cfc9-ghmwd" [8d8533b9-192b-49a8-8d17-96ffd98cb729] Running
	I1007 10:48:04.532328   23621 system_pods.go:89] "coredns-7c65d6cfc9-xzc88" [f22736c0-5ca4-4c9b-bcd4-cf95f9390507] Running
	I1007 10:48:04.532333   23621 system_pods.go:89] "etcd-ha-406505" [06acd5be-60d4-4e5d-878a-eb237eccef90] Running
	I1007 10:48:04.532338   23621 system_pods.go:89] "etcd-ha-406505-m02" [6bac8986-60bc-4067-8d14-39e7a7d89de4] Running
	I1007 10:48:04.532345   23621 system_pods.go:89] "kindnet-h8fh4" [4963cef8-d0f0-47a7-a9f3-4ec6cc1cbdd2] Running
	I1007 10:48:04.532350   23621 system_pods.go:89] "kindnet-pt74h" [bb72605c-a772-4b04-a14d-02efe957c9d0] Running
	I1007 10:48:04.532356   23621 system_pods.go:89] "kube-apiserver-ha-406505" [86ec3125-9faf-431c-829c-74bedca10848] Running
	I1007 10:48:04.532362   23621 system_pods.go:89] "kube-apiserver-ha-406505-m02" [9b1f1980-971a-4059-90f3-75aa418811e9] Running
	I1007 10:48:04.532370   23621 system_pods.go:89] "kube-controller-manager-ha-406505" [9f228931-e5fe-4983-aed5-71ec54a10242] Running
	I1007 10:48:04.532380   23621 system_pods.go:89] "kube-controller-manager-ha-406505-m02" [87c1a5f3-2c61-40b6-8ac6-8641fe480883] Running
	I1007 10:48:04.532386   23621 system_pods.go:89] "kube-proxy-6ng4z" [0bbf71c3-f4c6-44e2-a86f-2528957fd17e] Running
	I1007 10:48:04.532395   23621 system_pods.go:89] "kube-proxy-nlnhf" [053080d5-38da-4108-96aa-f4a8dbe5de91] Running
	I1007 10:48:04.532401   23621 system_pods.go:89] "kube-scheduler-ha-406505" [40a9a2f6-4f4e-48eb-8ef2-958b87cea171] Running
	I1007 10:48:04.532409   23621 system_pods.go:89] "kube-scheduler-ha-406505-m02" [b1def4a5-2143-46b6-ae4a-44b7997ec7b2] Running
	I1007 10:48:04.532415   23621 system_pods.go:89] "kube-vip-ha-406505" [e31ca80b-44ca-4b0a-8d38-ba06f81592f8] Running
	I1007 10:48:04.532422   23621 system_pods.go:89] "kube-vip-ha-406505-m02" [982dab2a-18d2-409a-9d27-51f746597898] Running
	I1007 10:48:04.532426   23621 system_pods.go:89] "storage-provisioner" [be10b32c-e562-40ef-8b47-04cd1caf9778] Running
	I1007 10:48:04.532436   23621 system_pods.go:126] duration metric: took 205.234668ms to wait for k8s-apps to be running ...
	I1007 10:48:04.532449   23621 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 10:48:04.532504   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 10:48:04.548000   23621 system_svc.go:56] duration metric: took 15.524581ms WaitForService to wait for kubelet
	I1007 10:48:04.548032   23621 kubeadm.go:582] duration metric: took 23.652436292s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 10:48:04.548054   23621 node_conditions.go:102] verifying NodePressure condition ...
	I1007 10:48:04.723508   23621 request.go:632] Waited for 175.357529ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes
	I1007 10:48:04.723563   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes
	I1007 10:48:04.723568   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:04.723576   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:04.723585   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:04.728067   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:04.728956   23621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 10:48:04.728985   23621 node_conditions.go:123] node cpu capacity is 2
	I1007 10:48:04.728999   23621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 10:48:04.729004   23621 node_conditions.go:123] node cpu capacity is 2
	I1007 10:48:04.729010   23621 node_conditions.go:105] duration metric: took 180.950188ms to run NodePressure ...
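node_conditions.go reads each node's ephemeral-storage and CPU capacity from the Node status before declaring NodePressure verified. The equivalent lookup with client-go, as a sketch with an assumed clientset and illustrative output format:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists every node's ephemeral-storage and CPU capacity,
// the same fields node_conditions.go reports in the log above.
func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}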
	I1007 10:48:04.729032   23621 start.go:241] waiting for startup goroutines ...
	I1007 10:48:04.729064   23621 start.go:255] writing updated cluster config ...
	I1007 10:48:04.731245   23621 out.go:201] 
	I1007 10:48:04.732721   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:48:04.732820   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:48:04.734501   23621 out.go:177] * Starting "ha-406505-m03" control-plane node in "ha-406505" cluster
	I1007 10:48:04.735780   23621 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:48:04.735806   23621 cache.go:56] Caching tarball of preloaded images
	I1007 10:48:04.735908   23621 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 10:48:04.735925   23621 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 10:48:04.736053   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:48:04.736293   23621 start.go:360] acquireMachinesLock for ha-406505-m03: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 10:48:04.736354   23621 start.go:364] duration metric: took 34.69µs to acquireMachinesLock for "ha-406505-m03"
	I1007 10:48:04.736376   23621 start.go:93] Provisioning new machine with config: &{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:48:04.736511   23621 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1007 10:48:04.738190   23621 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 10:48:04.738285   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:48:04.738332   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:48:04.754047   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32911
	I1007 10:48:04.754525   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:48:04.754992   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:48:04.755012   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:48:04.755365   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:48:04.755518   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetMachineName
	I1007 10:48:04.755655   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:04.755786   23621 start.go:159] libmachine.API.Create for "ha-406505" (driver="kvm2")
	I1007 10:48:04.755817   23621 client.go:168] LocalClient.Create starting
	I1007 10:48:04.755857   23621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem
	I1007 10:48:04.755899   23621 main.go:141] libmachine: Decoding PEM data...
	I1007 10:48:04.755923   23621 main.go:141] libmachine: Parsing certificate...
	I1007 10:48:04.755968   23621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem
	I1007 10:48:04.755997   23621 main.go:141] libmachine: Decoding PEM data...
	I1007 10:48:04.756011   23621 main.go:141] libmachine: Parsing certificate...
	I1007 10:48:04.756031   23621 main.go:141] libmachine: Running pre-create checks...
	I1007 10:48:04.756042   23621 main.go:141] libmachine: (ha-406505-m03) Calling .PreCreateCheck
	I1007 10:48:04.756216   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetConfigRaw
	I1007 10:48:04.756599   23621 main.go:141] libmachine: Creating machine...
	I1007 10:48:04.756611   23621 main.go:141] libmachine: (ha-406505-m03) Calling .Create
	I1007 10:48:04.756765   23621 main.go:141] libmachine: (ha-406505-m03) Creating KVM machine...
	I1007 10:48:04.757963   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found existing default KVM network
	I1007 10:48:04.758099   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found existing private KVM network mk-ha-406505
	I1007 10:48:04.758232   23621 main.go:141] libmachine: (ha-406505-m03) Setting up store path in /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03 ...
	I1007 10:48:04.758273   23621 main.go:141] libmachine: (ha-406505-m03) Building disk image from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 10:48:04.758345   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:04.758258   24407 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:48:04.758425   23621 main.go:141] libmachine: (ha-406505-m03) Downloading /home/jenkins/minikube-integration/19761-3912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 10:48:05.006754   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:05.006635   24407 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa...
	I1007 10:48:05.394400   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:05.394253   24407 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/ha-406505-m03.rawdisk...
	I1007 10:48:05.394429   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Writing magic tar header
	I1007 10:48:05.394439   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Writing SSH key tar header
	I1007 10:48:05.394459   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:05.394362   24407 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03 ...
	I1007 10:48:05.394475   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03
	I1007 10:48:05.394502   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines
	I1007 10:48:05.394516   23621 main.go:141] libmachine: (ha-406505-m03) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03 (perms=drwx------)
	I1007 10:48:05.394522   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:48:05.394535   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912
	I1007 10:48:05.394541   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 10:48:05.394550   23621 main.go:141] libmachine: (ha-406505-m03) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines (perms=drwxr-xr-x)
	I1007 10:48:05.394560   23621 main.go:141] libmachine: (ha-406505-m03) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube (perms=drwxr-xr-x)
	I1007 10:48:05.394571   23621 main.go:141] libmachine: (ha-406505-m03) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912 (perms=drwxrwxr-x)
	I1007 10:48:05.394584   23621 main.go:141] libmachine: (ha-406505-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 10:48:05.394597   23621 main.go:141] libmachine: (ha-406505-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 10:48:05.394606   23621 main.go:141] libmachine: (ha-406505-m03) Creating domain...
	I1007 10:48:05.394611   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home/jenkins
	I1007 10:48:05.394619   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home
	I1007 10:48:05.394623   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Skipping /home - not owner
	I1007 10:48:05.395724   23621 main.go:141] libmachine: (ha-406505-m03) define libvirt domain using xml: 
	I1007 10:48:05.395761   23621 main.go:141] libmachine: (ha-406505-m03) <domain type='kvm'>
	I1007 10:48:05.395773   23621 main.go:141] libmachine: (ha-406505-m03)   <name>ha-406505-m03</name>
	I1007 10:48:05.395784   23621 main.go:141] libmachine: (ha-406505-m03)   <memory unit='MiB'>2200</memory>
	I1007 10:48:05.395793   23621 main.go:141] libmachine: (ha-406505-m03)   <vcpu>2</vcpu>
	I1007 10:48:05.395802   23621 main.go:141] libmachine: (ha-406505-m03)   <features>
	I1007 10:48:05.395809   23621 main.go:141] libmachine: (ha-406505-m03)     <acpi/>
	I1007 10:48:05.395818   23621 main.go:141] libmachine: (ha-406505-m03)     <apic/>
	I1007 10:48:05.395827   23621 main.go:141] libmachine: (ha-406505-m03)     <pae/>
	I1007 10:48:05.395836   23621 main.go:141] libmachine: (ha-406505-m03)     
	I1007 10:48:05.395844   23621 main.go:141] libmachine: (ha-406505-m03)   </features>
	I1007 10:48:05.395854   23621 main.go:141] libmachine: (ha-406505-m03)   <cpu mode='host-passthrough'>
	I1007 10:48:05.395884   23621 main.go:141] libmachine: (ha-406505-m03)   
	I1007 10:48:05.395909   23621 main.go:141] libmachine: (ha-406505-m03)   </cpu>
	I1007 10:48:05.395940   23621 main.go:141] libmachine: (ha-406505-m03)   <os>
	I1007 10:48:05.395963   23621 main.go:141] libmachine: (ha-406505-m03)     <type>hvm</type>
	I1007 10:48:05.395977   23621 main.go:141] libmachine: (ha-406505-m03)     <boot dev='cdrom'/>
	I1007 10:48:05.396000   23621 main.go:141] libmachine: (ha-406505-m03)     <boot dev='hd'/>
	I1007 10:48:05.396019   23621 main.go:141] libmachine: (ha-406505-m03)     <bootmenu enable='no'/>
	I1007 10:48:05.396035   23621 main.go:141] libmachine: (ha-406505-m03)   </os>
	I1007 10:48:05.396063   23621 main.go:141] libmachine: (ha-406505-m03)   <devices>
	I1007 10:48:05.396094   23621 main.go:141] libmachine: (ha-406505-m03)     <disk type='file' device='cdrom'>
	I1007 10:48:05.396113   23621 main.go:141] libmachine: (ha-406505-m03)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/boot2docker.iso'/>
	I1007 10:48:05.396125   23621 main.go:141] libmachine: (ha-406505-m03)       <target dev='hdc' bus='scsi'/>
	I1007 10:48:05.396137   23621 main.go:141] libmachine: (ha-406505-m03)       <readonly/>
	I1007 10:48:05.396147   23621 main.go:141] libmachine: (ha-406505-m03)     </disk>
	I1007 10:48:05.396159   23621 main.go:141] libmachine: (ha-406505-m03)     <disk type='file' device='disk'>
	I1007 10:48:05.396176   23621 main.go:141] libmachine: (ha-406505-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 10:48:05.396192   23621 main.go:141] libmachine: (ha-406505-m03)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/ha-406505-m03.rawdisk'/>
	I1007 10:48:05.396207   23621 main.go:141] libmachine: (ha-406505-m03)       <target dev='hda' bus='virtio'/>
	I1007 10:48:05.396219   23621 main.go:141] libmachine: (ha-406505-m03)     </disk>
	I1007 10:48:05.396231   23621 main.go:141] libmachine: (ha-406505-m03)     <interface type='network'>
	I1007 10:48:05.396243   23621 main.go:141] libmachine: (ha-406505-m03)       <source network='mk-ha-406505'/>
	I1007 10:48:05.396258   23621 main.go:141] libmachine: (ha-406505-m03)       <model type='virtio'/>
	I1007 10:48:05.396270   23621 main.go:141] libmachine: (ha-406505-m03)     </interface>
	I1007 10:48:05.396280   23621 main.go:141] libmachine: (ha-406505-m03)     <interface type='network'>
	I1007 10:48:05.396290   23621 main.go:141] libmachine: (ha-406505-m03)       <source network='default'/>
	I1007 10:48:05.396300   23621 main.go:141] libmachine: (ha-406505-m03)       <model type='virtio'/>
	I1007 10:48:05.396309   23621 main.go:141] libmachine: (ha-406505-m03)     </interface>
	I1007 10:48:05.396320   23621 main.go:141] libmachine: (ha-406505-m03)     <serial type='pty'>
	I1007 10:48:05.396337   23621 main.go:141] libmachine: (ha-406505-m03)       <target port='0'/>
	I1007 10:48:05.396351   23621 main.go:141] libmachine: (ha-406505-m03)     </serial>
	I1007 10:48:05.396362   23621 main.go:141] libmachine: (ha-406505-m03)     <console type='pty'>
	I1007 10:48:05.396372   23621 main.go:141] libmachine: (ha-406505-m03)       <target type='serial' port='0'/>
	I1007 10:48:05.396382   23621 main.go:141] libmachine: (ha-406505-m03)     </console>
	I1007 10:48:05.396391   23621 main.go:141] libmachine: (ha-406505-m03)     <rng model='virtio'>
	I1007 10:48:05.396401   23621 main.go:141] libmachine: (ha-406505-m03)       <backend model='random'>/dev/random</backend>
	I1007 10:48:05.396411   23621 main.go:141] libmachine: (ha-406505-m03)     </rng>
	I1007 10:48:05.396418   23621 main.go:141] libmachine: (ha-406505-m03)     
	I1007 10:48:05.396427   23621 main.go:141] libmachine: (ha-406505-m03)     
	I1007 10:48:05.396436   23621 main.go:141] libmachine: (ha-406505-m03)   </devices>
	I1007 10:48:05.396454   23621 main.go:141] libmachine: (ha-406505-m03) </domain>
	I1007 10:48:05.396464   23621 main.go:141] libmachine: (ha-406505-m03) 
	I1007 10:48:05.403522   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:af:df:35 in network default
	I1007 10:48:05.404128   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:05.404146   23621 main.go:141] libmachine: (ha-406505-m03) Ensuring networks are active...
	I1007 10:48:05.404936   23621 main.go:141] libmachine: (ha-406505-m03) Ensuring network default is active
	I1007 10:48:05.405208   23621 main.go:141] libmachine: (ha-406505-m03) Ensuring network mk-ha-406505 is active
	I1007 10:48:05.405622   23621 main.go:141] libmachine: (ha-406505-m03) Getting domain xml...
	I1007 10:48:05.406377   23621 main.go:141] libmachine: (ha-406505-m03) Creating domain...
	I1007 10:48:06.663273   23621 main.go:141] libmachine: (ha-406505-m03) Waiting to get IP...
	I1007 10:48:06.664152   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:06.664559   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:06.664583   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:06.664538   24407 retry.go:31] will retry after 215.584214ms: waiting for machine to come up
	I1007 10:48:06.882094   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:06.882713   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:06.882744   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:06.882654   24407 retry.go:31] will retry after 346.060218ms: waiting for machine to come up
	I1007 10:48:07.229850   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:07.230332   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:07.230440   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:07.230280   24407 retry.go:31] will retry after 442.798208ms: waiting for machine to come up
	I1007 10:48:07.675076   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:07.675596   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:07.675620   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:07.675547   24407 retry.go:31] will retry after 562.649906ms: waiting for machine to come up
	I1007 10:48:08.240324   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:08.240767   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:08.240800   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:08.240736   24407 retry.go:31] will retry after 482.878877ms: waiting for machine to come up
	I1007 10:48:08.725445   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:08.725807   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:08.725869   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:08.725755   24407 retry.go:31] will retry after 616.205186ms: waiting for machine to come up
	I1007 10:48:09.343485   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:09.343941   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:09.344003   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:09.343909   24407 retry.go:31] will retry after 1.040138153s: waiting for machine to come up
	I1007 10:48:10.386253   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:10.386682   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:10.386713   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:10.386637   24407 retry.go:31] will retry after 1.418753496s: waiting for machine to come up
	I1007 10:48:11.807040   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:11.807484   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:11.807521   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:11.807425   24407 retry.go:31] will retry after 1.535016663s: waiting for machine to come up
	I1007 10:48:13.343720   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:13.344267   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:13.344302   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:13.344197   24407 retry.go:31] will retry after 1.769880509s: waiting for machine to come up
	I1007 10:48:15.115316   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:15.115817   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:15.115850   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:15.115759   24407 retry.go:31] will retry after 2.49899664s: waiting for machine to come up
	I1007 10:48:17.617100   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:17.617680   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:17.617710   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:17.617615   24407 retry.go:31] will retry after 2.794854441s: waiting for machine to come up
	I1007 10:48:20.413842   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:20.414235   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:20.414299   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:20.414227   24407 retry.go:31] will retry after 2.870258619s: waiting for machine to come up
	I1007 10:48:23.285865   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:23.286247   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:23.286273   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:23.286205   24407 retry.go:31] will retry after 5.059515205s: waiting for machine to come up
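The retry.go lines above show the node-create step polling for the VM's DHCP lease with progressively longer waits. As a rough illustration only (a hypothetical helper, not minikube's actual retry package), a wait-with-backoff loop of this shape could look like:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls probe() with a growing delay until it succeeds or the
// overall timeout expires, mirroring the "will retry after <delay>" lines.
func waitForIP(probe func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := probe(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // stretch the interval between attempts
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	start := time.Now()
	ip, err := waitForIP(func() (string, error) {
		// Stand-in for the DHCP lease lookup; pretend the lease shows up after ~3s.
		if time.Since(start) > 3*time.Second {
			return "192.168.39.102", nil
		}
		return "", errors.New("no lease yet")
	}, 30*time.Second)
	fmt.Println(ip, err)
}
```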
	I1007 10:48:28.350184   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:28.350662   23621 main.go:141] libmachine: (ha-406505-m03) Found IP for machine: 192.168.39.102
	I1007 10:48:28.350688   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has current primary IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:28.350700   23621 main.go:141] libmachine: (ha-406505-m03) Reserving static IP address...
	I1007 10:48:28.351065   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find host DHCP lease matching {name: "ha-406505-m03", mac: "52:54:00:7e:e4:e0", ip: "192.168.39.102"} in network mk-ha-406505
	I1007 10:48:28.431618   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Getting to WaitForSSH function...
	I1007 10:48:28.431646   23621 main.go:141] libmachine: (ha-406505-m03) Reserved static IP address: 192.168.39.102
	I1007 10:48:28.431659   23621 main.go:141] libmachine: (ha-406505-m03) Waiting for SSH to be available...
	I1007 10:48:28.434458   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:28.434796   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505
	I1007 10:48:28.434824   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find defined IP address of network mk-ha-406505 interface with MAC address 52:54:00:7e:e4:e0
	I1007 10:48:28.434975   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Using SSH client type: external
	I1007 10:48:28.435007   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa (-rw-------)
	I1007 10:48:28.435035   23621 main.go:141] libmachine: (ha-406505-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 10:48:28.435054   23621 main.go:141] libmachine: (ha-406505-m03) DBG | About to run SSH command:
	I1007 10:48:28.435085   23621 main.go:141] libmachine: (ha-406505-m03) DBG | exit 0
	I1007 10:48:28.439710   23621 main.go:141] libmachine: (ha-406505-m03) DBG | SSH cmd err, output: exit status 255: 
	I1007 10:48:28.439737   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1007 10:48:28.439768   23621 main.go:141] libmachine: (ha-406505-m03) DBG | command : exit 0
	I1007 10:48:28.439798   23621 main.go:141] libmachine: (ha-406505-m03) DBG | err     : exit status 255
	I1007 10:48:28.439811   23621 main.go:141] libmachine: (ha-406505-m03) DBG | output  : 
	I1007 10:48:31.440230   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Getting to WaitForSSH function...
	I1007 10:48:31.442839   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.443280   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:31.443311   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.443446   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Using SSH client type: external
	I1007 10:48:31.443482   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa (-rw-------)
	I1007 10:48:31.443520   23621 main.go:141] libmachine: (ha-406505-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 10:48:31.443544   23621 main.go:141] libmachine: (ha-406505-m03) DBG | About to run SSH command:
	I1007 10:48:31.443556   23621 main.go:141] libmachine: (ha-406505-m03) DBG | exit 0
	I1007 10:48:31.568683   23621 main.go:141] libmachine: (ha-406505-m03) DBG | SSH cmd err, output: <nil>: 
	I1007 10:48:31.568948   23621 main.go:141] libmachine: (ha-406505-m03) KVM machine creation complete!
	I1007 10:48:31.569279   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetConfigRaw
	I1007 10:48:31.569953   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:31.570177   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:31.570345   23621 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 10:48:31.570360   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetState
	I1007 10:48:31.571674   23621 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 10:48:31.571686   23621 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 10:48:31.571691   23621 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 10:48:31.571696   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:31.574360   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.574751   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:31.574773   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.574972   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:31.575161   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.575318   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.575453   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:31.575630   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:48:31.575886   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1007 10:48:31.575901   23621 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 10:48:31.679615   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
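The `exit 0` probe above is the point where libmachine switches from the external ssh binary to its native SSH client. A minimal sketch of such a probe, assuming the golang.org/x/crypto/ssh package and the key path shown in the log (not necessarily how libmachine's client is structured), might be:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the external client
	}
	client, err := ssh.Dial("tcp", "192.168.39.102:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Equivalent of the probe the log runs repeatedly until SSH is up.
	if err := session.Run("exit 0"); err != nil {
		fmt.Println("ssh not ready:", err)
		return
	}
	fmt.Println("ssh is available")
}
```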
	I1007 10:48:31.679639   23621 main.go:141] libmachine: Detecting the provisioner...
	I1007 10:48:31.679646   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:31.682574   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.682919   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:31.682944   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.683116   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:31.683308   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.683480   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.683605   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:31.683787   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:48:31.683977   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1007 10:48:31.684002   23621 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 10:48:31.789204   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 10:48:31.789302   23621 main.go:141] libmachine: found compatible host: buildroot
	I1007 10:48:31.789319   23621 main.go:141] libmachine: Provisioning with buildroot...
	I1007 10:48:31.789332   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetMachineName
	I1007 10:48:31.789607   23621 buildroot.go:166] provisioning hostname "ha-406505-m03"
	I1007 10:48:31.789633   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetMachineName
	I1007 10:48:31.789836   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:31.792541   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.792898   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:31.792925   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.793077   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:31.793430   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.793697   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.793864   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:31.794038   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:48:31.794203   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1007 10:48:31.794220   23621 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406505-m03 && echo "ha-406505-m03" | sudo tee /etc/hostname
	I1007 10:48:31.915086   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406505-m03
	
	I1007 10:48:31.915117   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:31.918064   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.918448   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:31.918486   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.918647   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:31.918833   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.918992   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.919119   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:31.919284   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:48:31.919488   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1007 10:48:31.919532   23621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406505-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406505-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406505-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 10:48:32.033622   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:48:32.033656   23621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 10:48:32.033671   23621 buildroot.go:174] setting up certificates
	I1007 10:48:32.033679   23621 provision.go:84] configureAuth start
	I1007 10:48:32.033688   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetMachineName
	I1007 10:48:32.034012   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetIP
	I1007 10:48:32.037059   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.037482   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.037516   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.037674   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:32.040020   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.040373   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.040394   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.040541   23621 provision.go:143] copyHostCerts
	I1007 10:48:32.040567   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:48:32.040595   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 10:48:32.040603   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:48:32.040668   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 10:48:32.040738   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:48:32.040754   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 10:48:32.040761   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:48:32.040784   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 10:48:32.040824   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:48:32.040840   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 10:48:32.040846   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:48:32.040866   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 10:48:32.040911   23621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.ha-406505-m03 san=[127.0.0.1 192.168.39.102 ha-406505-m03 localhost minikube]
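provision.go logs that it generates a server certificate whose SANs cover 127.0.0.1, the node IP, the hostname, localhost and minikube. A self-contained sketch of issuing a certificate with those SANs (self-signed here for brevity; the log shows minikube signing it with its CA key instead) could look like:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-406505-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the "san=[...]" line in the log.
		DNSNames:    []string{"ha-406505-m03", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.102")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```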
	I1007 10:48:32.221278   23621 provision.go:177] copyRemoteCerts
	I1007 10:48:32.221329   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 10:48:32.221355   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:32.224264   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.224745   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.224771   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.224993   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:32.225158   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.225327   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:32.225465   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa Username:docker}
	I1007 10:48:32.308320   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 10:48:32.308394   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 10:48:32.337349   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 10:48:32.337427   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 10:48:32.362724   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 10:48:32.362808   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 10:48:32.388055   23621 provision.go:87] duration metric: took 354.362269ms to configureAuth
	I1007 10:48:32.388097   23621 buildroot.go:189] setting minikube options for container-runtime
	I1007 10:48:32.388337   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:48:32.388417   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:32.391464   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.391888   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.391916   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.392130   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:32.392314   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.392419   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.392546   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:32.392731   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:48:32.392934   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1007 10:48:32.392957   23621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 10:48:32.625746   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 10:48:32.625778   23621 main.go:141] libmachine: Checking connection to Docker...
	I1007 10:48:32.625788   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetURL
	I1007 10:48:32.627033   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Using libvirt version 6000000
	I1007 10:48:32.629153   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.629483   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.629535   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.629659   23621 main.go:141] libmachine: Docker is up and running!
	I1007 10:48:32.629673   23621 main.go:141] libmachine: Reticulating splines...
	I1007 10:48:32.629679   23621 client.go:171] duration metric: took 27.87385173s to LocalClient.Create
	I1007 10:48:32.629697   23621 start.go:167] duration metric: took 27.873912748s to libmachine.API.Create "ha-406505"
	I1007 10:48:32.629707   23621 start.go:293] postStartSetup for "ha-406505-m03" (driver="kvm2")
	I1007 10:48:32.629716   23621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 10:48:32.629732   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:32.629961   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 10:48:32.629987   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:32.632229   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.632615   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.632638   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.632778   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:32.632953   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.633107   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:32.633255   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa Username:docker}
	I1007 10:48:32.719017   23621 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 10:48:32.723755   23621 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 10:48:32.723780   23621 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 10:48:32.723839   23621 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 10:48:32.723945   23621 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 10:48:32.723957   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /etc/ssl/certs/110962.pem
	I1007 10:48:32.724071   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 10:48:32.734023   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:48:32.759071   23621 start.go:296] duration metric: took 129.349571ms for postStartSetup
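The filesync.go lines above scan the profile's files directory and map each local asset to the same path on the guest (files/etc/ssl/certs/110962.pem -> /etc/ssl/certs/110962.pem). A small sketch of that mapping, assuming nothing more than a plain directory walk, might be:

```go
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

func main() {
	root := "/home/jenkins/minikube-integration/19761-3912/.minikube/files"
	filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		// The path relative to the files dir becomes the destination on the VM.
		dest := strings.TrimPrefix(path, root)
		fmt.Printf("local asset: %s -> %s\n", path, dest)
		return nil
	})
}
```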
	I1007 10:48:32.759128   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetConfigRaw
	I1007 10:48:32.759727   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetIP
	I1007 10:48:32.762372   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.762794   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.762825   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.763105   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:48:32.763346   23621 start.go:128] duration metric: took 28.026823197s to createHost
	I1007 10:48:32.763370   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:32.765734   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.766060   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.766091   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.766305   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:32.766467   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.766612   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.766764   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:32.766903   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:48:32.767070   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1007 10:48:32.767079   23621 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 10:48:32.873753   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728298112.851911112
	
	I1007 10:48:32.873779   23621 fix.go:216] guest clock: 1728298112.851911112
	I1007 10:48:32.873789   23621 fix.go:229] Guest: 2024-10-07 10:48:32.851911112 +0000 UTC Remote: 2024-10-07 10:48:32.763358943 +0000 UTC m=+152.116498435 (delta=88.552169ms)
	I1007 10:48:32.873808   23621 fix.go:200] guest clock delta is within tolerance: 88.552169ms
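fix.go compares the guest's `date +%s.%N` output against the host-side timestamp and accepts the 88.55ms drift as within tolerance. A sketch of that comparison follows; the one-second tolerance is an assumption for illustration, not minikube's exact constant.

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far
// the guest clock is ahead of (or behind) the host timestamp.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	host := time.Unix(1728298112, 763358943) // host-side timestamp from the log
	delta, err := clockDelta("1728298112.851911112", host)
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed threshold
	if delta < tolerance && delta > -tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would sync clocks\n", delta)
	}
}
```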
	I1007 10:48:32.873815   23621 start.go:83] releasing machines lock for "ha-406505-m03", held for 28.137449792s
	I1007 10:48:32.873834   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:32.874113   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetIP
	I1007 10:48:32.877249   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.877618   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.877659   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.879531   23621 out.go:177] * Found network options:
	I1007 10:48:32.880848   23621 out.go:177]   - NO_PROXY=192.168.39.250,192.168.39.37
	W1007 10:48:32.882090   23621 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 10:48:32.882109   23621 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 10:48:32.882124   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:32.882710   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:32.882882   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:32.882980   23621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 10:48:32.883020   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	W1007 10:48:32.883028   23621 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 10:48:32.883048   23621 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 10:48:32.883114   23621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 10:48:32.883136   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:32.885892   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.886191   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.886254   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.886279   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.886434   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:32.886593   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.886690   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.886721   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.886723   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:32.886891   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:32.886927   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa Username:docker}
	I1007 10:48:32.887008   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.887172   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:32.887336   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa Username:docker}
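The "fail to check proxy env: Error ip not in block" warnings a few lines above suggest that plain-IP NO_PROXY entries such as 192.168.39.250 do not parse as CIDR blocks when the new node's IP is tested against them. That reading is an assumption about proxy.go, not confirmed by the log, but a membership check of that general shape would be:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// ipInNoProxyBlocks reports whether ip falls inside any CIDR entry of the
// NO_PROXY list; plain-IP entries fail CIDR parsing and surface as errors.
func ipInNoProxyBlocks(ip string, noProxy string) (bool, error) {
	target := net.ParseIP(ip)
	for _, entry := range strings.Split(noProxy, ",") {
		_, block, err := net.ParseCIDR(strings.TrimSpace(entry))
		if err != nil {
			return false, fmt.Errorf("ip not in block: %v", err)
		}
		if block.Contains(target) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := ipInNoProxyBlocks("192.168.39.102", "192.168.39.250,192.168.39.37")
	fmt.Println(ok, err)
}
```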
	I1007 10:48:33.125827   23621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 10:48:33.132836   23621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 10:48:33.132914   23621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 10:48:33.152264   23621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 10:48:33.152289   23621 start.go:495] detecting cgroup driver to use...
	I1007 10:48:33.152363   23621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 10:48:33.172642   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 10:48:33.190770   23621 docker.go:217] disabling cri-docker service (if available) ...
	I1007 10:48:33.190848   23621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 10:48:33.206401   23621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 10:48:33.222941   23621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 10:48:33.363133   23621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 10:48:33.526409   23621 docker.go:233] disabling docker service ...
	I1007 10:48:33.526475   23621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 10:48:33.542837   23621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 10:48:33.557673   23621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 10:48:33.715377   23621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 10:48:33.847470   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 10:48:33.862560   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 10:48:33.884061   23621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 10:48:33.884116   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.897298   23621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 10:48:33.897363   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.909096   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.921064   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.932787   23621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 10:48:33.944724   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.956149   23621 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.976708   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
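For reference, the sed edits above leave the CRI-O drop-in with a handful of overridden keys; a minimal way to confirm the result on the node (a sketch, not a step the test actually runs) is:

    $ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged' /etc/crio/crio.conf.d/02-crio.conf
    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",

The exact ordering depends on the rest of 02-crio.conf; only the values shown here follow from the commands in the log.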
	I1007 10:48:33.988978   23621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 10:48:33.999874   23621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 10:48:33.999940   23621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 10:48:34.015557   23621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 10:48:34.026499   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:48:34.149992   23621 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 10:48:34.251227   23621 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 10:48:34.251293   23621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 10:48:34.256863   23621 start.go:563] Will wait 60s for crictl version
	I1007 10:48:34.256915   23621 ssh_runner.go:195] Run: which crictl
	I1007 10:48:34.260970   23621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 10:48:34.301659   23621 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 10:48:34.301747   23621 ssh_runner.go:195] Run: crio --version
	I1007 10:48:34.332633   23621 ssh_runner.go:195] Run: crio --version
	I1007 10:48:34.367466   23621 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 10:48:34.369001   23621 out.go:177]   - env NO_PROXY=192.168.39.250
	I1007 10:48:34.370423   23621 out.go:177]   - env NO_PROXY=192.168.39.250,192.168.39.37
	I1007 10:48:34.371711   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetIP
	I1007 10:48:34.374438   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:34.374867   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:34.374897   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:34.375117   23621 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 10:48:34.379896   23621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:48:34.393502   23621 mustload.go:65] Loading cluster: ha-406505
	I1007 10:48:34.393757   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:48:34.394025   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:48:34.394061   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:48:34.411296   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38513
	I1007 10:48:34.411826   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:48:34.412384   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:48:34.412408   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:48:34.412720   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:48:34.412914   23621 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:48:34.414711   23621 host.go:66] Checking if "ha-406505" exists ...
	I1007 10:48:34.415007   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:48:34.415055   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:48:34.431721   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34665
	I1007 10:48:34.432227   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:48:34.432721   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:48:34.432743   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:48:34.433085   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:48:34.433286   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:48:34.433443   23621 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505 for IP: 192.168.39.102
	I1007 10:48:34.433455   23621 certs.go:194] generating shared ca certs ...
	I1007 10:48:34.433473   23621 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:48:34.433653   23621 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 10:48:34.433694   23621 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 10:48:34.433704   23621 certs.go:256] generating profile certs ...
	I1007 10:48:34.433769   23621 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key
	I1007 10:48:34.433796   23621 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.2cb567af
	I1007 10:48:34.433810   23621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.2cb567af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.250 192.168.39.37 192.168.39.102 192.168.39.254]
	I1007 10:48:34.626802   23621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.2cb567af ...
	I1007 10:48:34.626838   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.2cb567af: {Name:mk4dc5899bb034b35a02970b97ee9a5705168f50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:48:34.627028   23621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.2cb567af ...
	I1007 10:48:34.627045   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.2cb567af: {Name:mk33cc429fb28f1dd32077e7c6736b9265eee4dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:48:34.627160   23621 certs.go:381] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.2cb567af -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt
	I1007 10:48:34.627332   23621 certs.go:385] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.2cb567af -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key
	I1007 10:48:34.627505   23621 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key
	I1007 10:48:34.627523   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 10:48:34.627547   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 10:48:34.627570   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 10:48:34.627588   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 10:48:34.627606   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 10:48:34.627624   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 10:48:34.627650   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 10:48:34.648122   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 10:48:34.648245   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 10:48:34.648300   23621 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 10:48:34.648313   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 10:48:34.648345   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 10:48:34.648376   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 10:48:34.648424   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 10:48:34.649013   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:48:34.649072   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /usr/share/ca-certificates/110962.pem
	I1007 10:48:34.649091   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:48:34.649106   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem -> /usr/share/ca-certificates/11096.pem
	I1007 10:48:34.649154   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:48:34.652851   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:48:34.653287   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:48:34.653319   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:48:34.653480   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:48:34.653695   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:48:34.653872   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:48:34.653998   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:48:34.732255   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 10:48:34.739182   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 10:48:34.751245   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 10:48:34.755732   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1007 10:48:34.766849   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 10:48:34.771581   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 10:48:34.783409   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 10:48:34.788150   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1007 10:48:34.799354   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 10:48:34.804283   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 10:48:34.816354   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 10:48:34.821135   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 10:48:34.834977   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 10:48:34.863883   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 10:48:34.896166   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 10:48:34.926479   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 10:48:34.954664   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1007 10:48:34.981371   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 10:48:35.009381   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 10:48:35.036950   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 10:48:35.063824   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 10:48:35.091476   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 10:48:35.119954   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 10:48:35.148052   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 10:48:35.166363   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1007 10:48:35.186175   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 10:48:35.205554   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1007 10:48:35.223002   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 10:48:35.240092   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 10:48:35.256797   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 10:48:35.274939   23621 ssh_runner.go:195] Run: openssl version
	I1007 10:48:35.281362   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 10:48:35.293636   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 10:48:35.298579   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 10:48:35.298639   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 10:48:35.304753   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 10:48:35.315888   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 10:48:35.326832   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:48:35.331554   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:48:35.331619   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:48:35.337434   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 10:48:35.348665   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 10:48:35.360023   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 10:48:35.365259   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 10:48:35.365338   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 10:48:35.372821   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
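The apiserver certificate generated earlier (crypto.go:68) and copied to the node is minted for the HA VIP and every control-plane IP; a hypothetical manual check of its SANs, not part of the test run, would be:

    $ sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A2 'Subject Alternative Name'

The IP addresses reported should match the set logged at crypto.go:68: 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.250, 192.168.39.37, 192.168.39.102 and 192.168.39.254.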
	I1007 10:48:35.385592   23621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 10:48:35.390405   23621 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 10:48:35.390455   23621 kubeadm.go:934] updating node {m03 192.168.39.102 8443 v1.31.1 crio true true} ...
	I1007 10:48:35.390529   23621 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406505-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 10:48:35.390554   23621 kube-vip.go:115] generating kube-vip config ...
	I1007 10:48:35.390588   23621 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 10:48:35.407020   23621 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 10:48:35.407098   23621 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1007 10:48:35.407155   23621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 10:48:35.417610   23621 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1007 10:48:35.417677   23621 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1007 10:48:35.428405   23621 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1007 10:48:35.428437   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 10:48:35.428436   23621 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1007 10:48:35.428474   23621 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1007 10:48:35.428487   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 10:48:35.428508   23621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 10:48:35.428547   23621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 10:48:35.428511   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 10:48:35.446473   23621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1007 10:48:35.446517   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1007 10:48:35.446544   23621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1007 10:48:35.446546   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 10:48:35.446583   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1007 10:48:35.446648   23621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 10:48:35.470883   23621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1007 10:48:35.470927   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1007 10:48:36.357285   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 10:48:36.367780   23621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 10:48:36.389088   23621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 10:48:36.406417   23621 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 10:48:36.424782   23621 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 10:48:36.429051   23621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:48:36.442669   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:48:36.586820   23621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:48:36.605650   23621 host.go:66] Checking if "ha-406505" exists ...
	I1007 10:48:36.606095   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:48:36.606145   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:48:36.622824   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45931
	I1007 10:48:36.623406   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:48:36.623956   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:48:36.624010   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:48:36.624375   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:48:36.624602   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:48:36.624756   23621 start.go:317] joinCluster: &{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluste
rName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:48:36.624906   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1007 10:48:36.624922   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:48:36.628085   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:48:36.628498   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:48:36.628533   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:48:36.628663   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:48:36.628842   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:48:36.628992   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:48:36.629138   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:48:36.794813   23621 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:48:36.794869   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gpv0xr.ao0m8qerz0fls7pl --discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-406505-m03 --control-plane --apiserver-advertise-address=192.168.39.102 --apiserver-bind-port=8443"
	I1007 10:48:59.856325   23621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gpv0xr.ao0m8qerz0fls7pl --discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-406505-m03 --control-plane --apiserver-advertise-address=192.168.39.102 --apiserver-bind-port=8443": (23.06138473s)
	I1007 10:48:59.856362   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1007 10:49:00.490810   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-406505-m03 minikube.k8s.io/updated_at=2024_10_07T10_49_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f minikube.k8s.io/name=ha-406505 minikube.k8s.io/primary=false
	I1007 10:49:00.615125   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-406505-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1007 10:49:00.740706   23621 start.go:319] duration metric: took 24.115945375s to joinCluster
	I1007 10:49:00.740808   23621 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:49:00.741314   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:49:00.742651   23621 out.go:177] * Verifying Kubernetes components...
	I1007 10:49:00.744087   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:49:00.980117   23621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:49:00.996987   23621 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:49:00.997383   23621 kapi.go:59] client config for ha-406505: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.crt", KeyFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key", CAFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 10:49:00.997456   23621 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.250:8443
	I1007 10:49:00.997848   23621 node_ready.go:35] waiting up to 6m0s for node "ha-406505-m03" to be "Ready" ...
	I1007 10:49:00.997952   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:00.997963   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:00.997973   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:00.997980   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:01.002879   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:01.498022   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:01.498047   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:01.498058   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:01.498063   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:01.502144   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:01.998532   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:01.998559   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:01.998571   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:01.998580   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:02.002214   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:02.498080   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:02.498113   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:02.498126   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:02.498132   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:02.502433   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:02.998449   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:02.998474   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:02.998482   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:02.998486   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:03.001753   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:03.002481   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:03.498693   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:03.498717   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:03.498727   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:03.498732   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:03.503726   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:03.998977   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:03.999008   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:03.999019   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:03.999026   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:04.002356   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:04.498338   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:04.498365   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:04.498374   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:04.498379   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:04.502295   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:04.998619   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:04.998645   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:04.998656   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:04.998660   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:05.001641   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:49:05.498634   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:05.498660   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:05.498671   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:05.498677   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:05.502156   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:05.502885   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:05.998723   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:05.998794   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:05.998812   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:05.998818   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:06.003873   23621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 10:49:06.499098   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:06.499119   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:06.499126   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:06.499131   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:06.503089   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:06.998553   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:06.998587   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:06.998595   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:06.998599   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:07.002580   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:07.498710   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:07.498736   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:07.498746   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:07.498751   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:07.502124   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:07.502967   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:07.998236   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:07.998258   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:07.998267   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:07.998271   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:08.001970   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:08.498896   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:08.498918   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:08.498927   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:08.498931   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:08.502697   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:08.998532   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:08.998561   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:08.998571   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:08.998578   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:09.002002   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:09.498039   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:09.498064   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:09.498077   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:09.498084   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:09.502005   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:09.998852   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:09.998879   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:09.998887   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:09.998893   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:10.002735   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:10.003524   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:10.499000   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:10.499026   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:10.499034   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:10.499046   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:10.502792   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:10.998624   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:10.998647   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:10.998659   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:10.998663   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:11.002342   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:11.498150   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:11.498177   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:11.498186   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:11.498193   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:11.502277   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:11.998714   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:11.998735   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:11.998743   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:11.998748   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:12.002263   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:12.498755   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:12.498782   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:12.498794   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:12.498801   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:12.502981   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:12.503718   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:12.999042   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:12.999069   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:12.999079   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:12.999085   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:13.002464   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:13.498077   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:13.498101   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:13.498110   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:13.498115   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:13.501652   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:13.998309   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:13.998332   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:13.998343   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:13.998347   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:14.001704   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:14.498713   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:14.498734   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:14.498742   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:14.498745   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:14.502719   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:14.999025   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:14.999047   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:14.999055   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:14.999059   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:15.002812   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:15.003362   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:15.498817   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:15.498839   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:15.498846   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:15.498850   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:15.504009   23621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 10:49:15.998456   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:15.998477   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:15.998485   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:15.998488   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:16.001780   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:16.498830   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:16.498857   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:16.498868   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:16.498873   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:16.502631   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:16.998224   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:16.998257   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:16.998268   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:16.998274   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:17.001615   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:17.498645   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:17.498672   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:17.498684   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:17.498688   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:17.502201   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:17.502837   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:17.998189   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:17.998213   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:17.998220   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:17.998226   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.001816   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:18.498415   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:18.498450   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.498462   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.498469   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.502015   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:18.502523   23621 node_ready.go:49] node "ha-406505-m03" has status "Ready":"True"
	I1007 10:49:18.502543   23621 node_ready.go:38] duration metric: took 17.504667395s for node "ha-406505-m03" to be "Ready" ...
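The polling loop above simply re-reads the node object until its Ready condition reports True. Outside the test harness the same wait can be expressed with kubectl (hypothetical commands, assuming the profile's kubeconfig context is named ha-406505):

    $ kubectl --context ha-406505 wait --for=condition=Ready node/ha-406505-m03 --timeout=6m
    $ kubectl --context ha-406505 get node ha-406505-m03 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    True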
	I1007 10:49:18.502551   23621 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 10:49:18.502632   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:49:18.502642   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.502650   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.502656   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.509327   23621 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 10:49:18.518372   23621 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ghmwd" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.518459   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghmwd
	I1007 10:49:18.518464   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.518472   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.518479   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.521616   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:18.522356   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:18.522371   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.522378   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.522382   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.524976   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:49:18.525512   23621 pod_ready.go:93] pod "coredns-7c65d6cfc9-ghmwd" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:18.525532   23621 pod_ready.go:82] duration metric: took 7.133708ms for pod "coredns-7c65d6cfc9-ghmwd" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.525541   23621 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xzc88" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.525593   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xzc88
	I1007 10:49:18.525602   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.525608   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.525612   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.528321   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:49:18.529035   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:18.529049   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.529055   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.529058   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.531646   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:49:18.532124   23621 pod_ready.go:93] pod "coredns-7c65d6cfc9-xzc88" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:18.532141   23621 pod_ready.go:82] duration metric: took 6.593928ms for pod "coredns-7c65d6cfc9-xzc88" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.532153   23621 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.532225   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-406505
	I1007 10:49:18.532234   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.532244   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.532249   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.534614   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:49:18.535248   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:18.535264   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.535274   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.535279   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.537970   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:49:18.538368   23621 pod_ready.go:93] pod "etcd-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:18.538387   23621 pod_ready.go:82] duration metric: took 6.225816ms for pod "etcd-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.538401   23621 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.538461   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-406505-m02
	I1007 10:49:18.538472   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.538483   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.538491   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.541748   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:18.542359   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:18.542377   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.542389   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.542397   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.545668   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:18.546089   23621 pod_ready.go:93] pod "etcd-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:18.546104   23621 pod_ready.go:82] duration metric: took 7.695818ms for pod "etcd-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.546113   23621 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.698417   23621 request.go:632] Waited for 152.247174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-406505-m03
	I1007 10:49:18.698479   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-406505-m03
	I1007 10:49:18.698485   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.698492   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.698497   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.702261   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:18.899482   23621 request.go:632] Waited for 196.389358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:18.899569   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:18.899582   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.899593   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.899603   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.903728   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:18.904256   23621 pod_ready.go:93] pod "etcd-ha-406505-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:18.904275   23621 pod_ready.go:82] duration metric: took 358.156028ms for pod "etcd-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.904291   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:19.099454   23621 request.go:632] Waited for 195.101714ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505
	I1007 10:49:19.099547   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505
	I1007 10:49:19.099559   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:19.099569   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:19.099575   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:19.103611   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:19.298735   23621 request.go:632] Waited for 194.375211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:19.298818   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:19.298825   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:19.298837   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:19.298856   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:19.302548   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:19.303053   23621 pod_ready.go:93] pod "kube-apiserver-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:19.303069   23621 pod_ready.go:82] duration metric: took 398.772541ms for pod "kube-apiserver-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:19.303079   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:19.499176   23621 request.go:632] Waited for 196.018641ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505-m02
	I1007 10:49:19.499270   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505-m02
	I1007 10:49:19.499283   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:19.499296   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:19.499309   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:19.503085   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:19.699374   23621 request.go:632] Waited for 195.380837ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:19.699426   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:19.699432   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:19.699439   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:19.699443   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:19.703099   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:19.703625   23621 pod_ready.go:93] pod "kube-apiserver-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:19.703644   23621 pod_ready.go:82] duration metric: took 400.557163ms for pod "kube-apiserver-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:19.703654   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:19.899212   23621 request.go:632] Waited for 195.494385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505-m03
	I1007 10:49:19.899266   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505-m03
	I1007 10:49:19.899271   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:19.899283   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:19.899289   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:19.902896   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:20.098927   23621 request.go:632] Waited for 195.376619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:20.098987   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:20.098993   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:20.099000   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:20.099004   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:20.102179   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:20.102740   23621 pod_ready.go:93] pod "kube-apiserver-ha-406505-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:20.102763   23621 pod_ready.go:82] duration metric: took 399.102679ms for pod "kube-apiserver-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:20.102773   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:20.298944   23621 request.go:632] Waited for 196.089064ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505
	I1007 10:49:20.299004   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505
	I1007 10:49:20.299010   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:20.299017   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:20.299023   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:20.302867   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:20.498409   23621 request.go:632] Waited for 194.294244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:20.498569   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:20.498582   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:20.498592   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:20.498599   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:20.502204   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:20.503003   23621 pod_ready.go:93] pod "kube-controller-manager-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:20.503027   23621 pod_ready.go:82] duration metric: took 400.247835ms for pod "kube-controller-manager-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:20.503037   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:20.699318   23621 request.go:632] Waited for 196.218592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505-m02
	I1007 10:49:20.699394   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505-m02
	I1007 10:49:20.699405   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:20.699415   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:20.699424   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:20.702950   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:20.899287   23621 request.go:632] Waited for 195.402635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:20.899343   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:20.899349   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:20.899370   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:20.899375   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:20.903339   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:20.904141   23621 pod_ready.go:93] pod "kube-controller-manager-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:20.904160   23621 pod_ready.go:82] duration metric: took 401.116067ms for pod "kube-controller-manager-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:20.904170   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:21.099320   23621 request.go:632] Waited for 195.054621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505-m03
	I1007 10:49:21.099383   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505-m03
	I1007 10:49:21.099391   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:21.099404   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:21.099415   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:21.103012   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:21.299153   23621 request.go:632] Waited for 195.377964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:21.299213   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:21.299218   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:21.299225   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:21.299229   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:21.303015   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:21.303516   23621 pod_ready.go:93] pod "kube-controller-manager-ha-406505-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:21.303534   23621 pod_ready.go:82] duration metric: took 399.355676ms for pod "kube-controller-manager-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:21.303543   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6ng4z" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:21.498530   23621 request.go:632] Waited for 194.920994ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6ng4z
	I1007 10:49:21.498597   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6ng4z
	I1007 10:49:21.498603   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:21.498610   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:21.498614   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:21.502242   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:21.699351   23621 request.go:632] Waited for 196.362706ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:21.699418   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:21.699423   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:21.699431   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:21.699435   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:21.702722   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:21.703412   23621 pod_ready.go:93] pod "kube-proxy-6ng4z" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:21.703429   23621 pod_ready.go:82] duration metric: took 399.878679ms for pod "kube-proxy-6ng4z" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:21.703439   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c79zf" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:21.898495   23621 request.go:632] Waited for 195.001064ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c79zf
	I1007 10:49:21.898570   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c79zf
	I1007 10:49:21.898576   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:21.898583   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:21.898587   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:21.903113   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:22.099311   23621 request.go:632] Waited for 195.352243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:22.099376   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:22.099384   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:22.099392   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:22.099397   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:22.102668   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:22.103269   23621 pod_ready.go:93] pod "kube-proxy-c79zf" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:22.103284   23621 pod_ready.go:82] duration metric: took 399.838704ms for pod "kube-proxy-c79zf" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:22.103298   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nlnhf" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:22.299438   23621 request.go:632] Waited for 196.048125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nlnhf
	I1007 10:49:22.299517   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nlnhf
	I1007 10:49:22.299528   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:22.299539   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:22.299548   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:22.303349   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:22.499362   23621 request.go:632] Waited for 195.369323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:22.499426   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:22.499434   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:22.499445   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:22.499452   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:22.503812   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:22.504569   23621 pod_ready.go:93] pod "kube-proxy-nlnhf" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:22.504595   23621 pod_ready.go:82] duration metric: took 401.287955ms for pod "kube-proxy-nlnhf" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:22.504608   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:22.698460   23621 request.go:632] Waited for 193.785531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505
	I1007 10:49:22.698548   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505
	I1007 10:49:22.698557   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:22.698568   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:22.698578   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:22.702017   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:22.898981   23621 request.go:632] Waited for 196.377795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:22.899067   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:22.899078   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:22.899089   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:22.899095   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:22.902303   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:22.903166   23621 pod_ready.go:93] pod "kube-scheduler-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:22.903182   23621 pod_ready.go:82] duration metric: took 398.566323ms for pod "kube-scheduler-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:22.903191   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:23.099385   23621 request.go:632] Waited for 196.133679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505-m02
	I1007 10:49:23.099448   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505-m02
	I1007 10:49:23.099455   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:23.099466   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:23.099472   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:23.102786   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:23.298901   23621 request.go:632] Waited for 195.266193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:23.298979   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:23.299002   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:23.299017   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:23.299025   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:23.302232   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:23.302790   23621 pod_ready.go:93] pod "kube-scheduler-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:23.302809   23621 pod_ready.go:82] duration metric: took 399.610952ms for pod "kube-scheduler-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:23.302821   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:23.499180   23621 request.go:632] Waited for 196.292359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505-m03
	I1007 10:49:23.499272   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505-m03
	I1007 10:49:23.499287   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:23.499297   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:23.499301   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:23.502869   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:23.699193   23621 request.go:632] Waited for 195.355503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:23.699258   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:23.699265   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:23.699273   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:23.699279   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:23.703084   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:23.703667   23621 pod_ready.go:93] pod "kube-scheduler-ha-406505-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:23.703685   23621 pod_ready.go:82] duration metric: took 400.856999ms for pod "kube-scheduler-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:23.703698   23621 pod_ready.go:39] duration metric: took 5.201137337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 10:49:23.703714   23621 api_server.go:52] waiting for apiserver process to appear ...
	I1007 10:49:23.703771   23621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 10:49:23.720988   23621 api_server.go:72] duration metric: took 22.980139715s to wait for apiserver process to appear ...
	I1007 10:49:23.721017   23621 api_server.go:88] waiting for apiserver healthz status ...
	I1007 10:49:23.721038   23621 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I1007 10:49:23.727765   23621 api_server.go:279] https://192.168.39.250:8443/healthz returned 200:
	ok
	I1007 10:49:23.727841   23621 round_trippers.go:463] GET https://192.168.39.250:8443/version
	I1007 10:49:23.727846   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:23.727855   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:23.727860   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:23.728928   23621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1007 10:49:23.729002   23621 api_server.go:141] control plane version: v1.31.1
	I1007 10:49:23.729019   23621 api_server.go:131] duration metric: took 7.995236ms to wait for apiserver health ...
	I1007 10:49:23.729029   23621 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 10:49:23.899405   23621 request.go:632] Waited for 170.304588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:49:23.899474   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:49:23.899479   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:23.899494   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:23.899501   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:23.905647   23621 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 10:49:23.912018   23621 system_pods.go:59] 24 kube-system pods found
	I1007 10:49:23.912046   23621 system_pods.go:61] "coredns-7c65d6cfc9-ghmwd" [8d8533b9-192b-49a8-8d17-96ffd98cb729] Running
	I1007 10:49:23.912051   23621 system_pods.go:61] "coredns-7c65d6cfc9-xzc88" [f22736c0-5ca4-4c9b-bcd4-cf95f9390507] Running
	I1007 10:49:23.912055   23621 system_pods.go:61] "etcd-ha-406505" [06acd5be-60d4-4e5d-878a-eb237eccef90] Running
	I1007 10:49:23.912059   23621 system_pods.go:61] "etcd-ha-406505-m02" [6bac8986-60bc-4067-8d14-39e7a7d89de4] Running
	I1007 10:49:23.912064   23621 system_pods.go:61] "etcd-ha-406505-m03" [2c0079fb-51f1-423c-8b4c-893824342cd6] Running
	I1007 10:49:23.912069   23621 system_pods.go:61] "kindnet-28vpp" [c14e8bdf-ebc5-4349-adb4-6786cd15551d] Running
	I1007 10:49:23.912074   23621 system_pods.go:61] "kindnet-h8fh4" [4963cef8-d0f0-47a7-a9f3-4ec6cc1cbdd2] Running
	I1007 10:49:23.912079   23621 system_pods.go:61] "kindnet-pt74h" [bb72605c-a772-4b04-a14d-02efe957c9d0] Running
	I1007 10:49:23.912087   23621 system_pods.go:61] "kube-apiserver-ha-406505" [86ec3125-9faf-431c-829c-74bedca10848] Running
	I1007 10:49:23.912092   23621 system_pods.go:61] "kube-apiserver-ha-406505-m02" [9b1f1980-971a-4059-90f3-75aa418811e9] Running
	I1007 10:49:23.912101   23621 system_pods.go:61] "kube-apiserver-ha-406505-m03" [8bc80684-cd9a-40b1-94e1-02cb77917c36] Running
	I1007 10:49:23.912106   23621 system_pods.go:61] "kube-controller-manager-ha-406505" [9f228931-e5fe-4983-aed5-71ec54a10242] Running
	I1007 10:49:23.912111   23621 system_pods.go:61] "kube-controller-manager-ha-406505-m02" [87c1a5f3-2c61-40b6-8ac6-8641fe480883] Running
	I1007 10:49:23.912116   23621 system_pods.go:61] "kube-controller-manager-ha-406505-m03" [ab97ec1a-fb7e-42a5-b77c-721ccf85db1d] Running
	I1007 10:49:23.912120   23621 system_pods.go:61] "kube-proxy-6ng4z" [0bbf71c3-f4c6-44e2-a86f-2528957fd17e] Running
	I1007 10:49:23.912123   23621 system_pods.go:61] "kube-proxy-c79zf" [2b12aaa5-9560-459b-a3bb-e45e73a6b663] Running
	I1007 10:49:23.912129   23621 system_pods.go:61] "kube-proxy-nlnhf" [053080d5-38da-4108-96aa-f4a8dbe5de91] Running
	I1007 10:49:23.912132   23621 system_pods.go:61] "kube-scheduler-ha-406505" [40a9a2f6-4f4e-48eb-8ef2-958b87cea171] Running
	I1007 10:49:23.912135   23621 system_pods.go:61] "kube-scheduler-ha-406505-m02" [b1def4a5-2143-46b6-ae4a-44b7997ec7b2] Running
	I1007 10:49:23.912139   23621 system_pods.go:61] "kube-scheduler-ha-406505-m03" [da8d486f-250a-4961-ac7c-b1435c52a3ca] Running
	I1007 10:49:23.912147   23621 system_pods.go:61] "kube-vip-ha-406505" [e31ca80b-44ca-4b0a-8d38-ba06f81592f8] Running
	I1007 10:49:23.912152   23621 system_pods.go:61] "kube-vip-ha-406505-m02" [982dab2a-18d2-409a-9d27-51f746597898] Running
	I1007 10:49:23.912155   23621 system_pods.go:61] "kube-vip-ha-406505-m03" [a90a6084-73a3-476c-9729-1d8b45c6f3fc] Running
	I1007 10:49:23.912160   23621 system_pods.go:61] "storage-provisioner" [be10b32c-e562-40ef-8b47-04cd1caf9778] Running
	I1007 10:49:23.912167   23621 system_pods.go:74] duration metric: took 183.129229ms to wait for pod list to return data ...
	I1007 10:49:23.912178   23621 default_sa.go:34] waiting for default service account to be created ...
	I1007 10:49:24.099457   23621 request.go:632] Waited for 187.192356ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/default/serviceaccounts
	I1007 10:49:24.099519   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/default/serviceaccounts
	I1007 10:49:24.099524   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:24.099532   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:24.099538   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:24.104028   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:24.104180   23621 default_sa.go:45] found service account: "default"
	I1007 10:49:24.104202   23621 default_sa.go:55] duration metric: took 192.014074ms for default service account to be created ...
	I1007 10:49:24.104214   23621 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 10:49:24.299461   23621 request.go:632] Waited for 195.156179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:49:24.299513   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:49:24.299518   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:24.299525   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:24.299530   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:24.305308   23621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 10:49:24.311531   23621 system_pods.go:86] 24 kube-system pods found
	I1007 10:49:24.311559   23621 system_pods.go:89] "coredns-7c65d6cfc9-ghmwd" [8d8533b9-192b-49a8-8d17-96ffd98cb729] Running
	I1007 10:49:24.311565   23621 system_pods.go:89] "coredns-7c65d6cfc9-xzc88" [f22736c0-5ca4-4c9b-bcd4-cf95f9390507] Running
	I1007 10:49:24.311569   23621 system_pods.go:89] "etcd-ha-406505" [06acd5be-60d4-4e5d-878a-eb237eccef90] Running
	I1007 10:49:24.311575   23621 system_pods.go:89] "etcd-ha-406505-m02" [6bac8986-60bc-4067-8d14-39e7a7d89de4] Running
	I1007 10:49:24.311579   23621 system_pods.go:89] "etcd-ha-406505-m03" [2c0079fb-51f1-423c-8b4c-893824342cd6] Running
	I1007 10:49:24.311583   23621 system_pods.go:89] "kindnet-28vpp" [c14e8bdf-ebc5-4349-adb4-6786cd15551d] Running
	I1007 10:49:24.311589   23621 system_pods.go:89] "kindnet-h8fh4" [4963cef8-d0f0-47a7-a9f3-4ec6cc1cbdd2] Running
	I1007 10:49:24.311593   23621 system_pods.go:89] "kindnet-pt74h" [bb72605c-a772-4b04-a14d-02efe957c9d0] Running
	I1007 10:49:24.311599   23621 system_pods.go:89] "kube-apiserver-ha-406505" [86ec3125-9faf-431c-829c-74bedca10848] Running
	I1007 10:49:24.311602   23621 system_pods.go:89] "kube-apiserver-ha-406505-m02" [9b1f1980-971a-4059-90f3-75aa418811e9] Running
	I1007 10:49:24.311606   23621 system_pods.go:89] "kube-apiserver-ha-406505-m03" [8bc80684-cd9a-40b1-94e1-02cb77917c36] Running
	I1007 10:49:24.311611   23621 system_pods.go:89] "kube-controller-manager-ha-406505" [9f228931-e5fe-4983-aed5-71ec54a10242] Running
	I1007 10:49:24.311617   23621 system_pods.go:89] "kube-controller-manager-ha-406505-m02" [87c1a5f3-2c61-40b6-8ac6-8641fe480883] Running
	I1007 10:49:24.311620   23621 system_pods.go:89] "kube-controller-manager-ha-406505-m03" [ab97ec1a-fb7e-42a5-b77c-721ccf85db1d] Running
	I1007 10:49:24.311626   23621 system_pods.go:89] "kube-proxy-6ng4z" [0bbf71c3-f4c6-44e2-a86f-2528957fd17e] Running
	I1007 10:49:24.311629   23621 system_pods.go:89] "kube-proxy-c79zf" [2b12aaa5-9560-459b-a3bb-e45e73a6b663] Running
	I1007 10:49:24.311635   23621 system_pods.go:89] "kube-proxy-nlnhf" [053080d5-38da-4108-96aa-f4a8dbe5de91] Running
	I1007 10:49:24.311638   23621 system_pods.go:89] "kube-scheduler-ha-406505" [40a9a2f6-4f4e-48eb-8ef2-958b87cea171] Running
	I1007 10:49:24.311643   23621 system_pods.go:89] "kube-scheduler-ha-406505-m02" [b1def4a5-2143-46b6-ae4a-44b7997ec7b2] Running
	I1007 10:49:24.311646   23621 system_pods.go:89] "kube-scheduler-ha-406505-m03" [da8d486f-250a-4961-ac7c-b1435c52a3ca] Running
	I1007 10:49:24.311649   23621 system_pods.go:89] "kube-vip-ha-406505" [e31ca80b-44ca-4b0a-8d38-ba06f81592f8] Running
	I1007 10:49:24.311652   23621 system_pods.go:89] "kube-vip-ha-406505-m02" [982dab2a-18d2-409a-9d27-51f746597898] Running
	I1007 10:49:24.311655   23621 system_pods.go:89] "kube-vip-ha-406505-m03" [a90a6084-73a3-476c-9729-1d8b45c6f3fc] Running
	I1007 10:49:24.311658   23621 system_pods.go:89] "storage-provisioner" [be10b32c-e562-40ef-8b47-04cd1caf9778] Running
	I1007 10:49:24.311664   23621 system_pods.go:126] duration metric: took 207.442478ms to wait for k8s-apps to be running ...
	I1007 10:49:24.311673   23621 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 10:49:24.311718   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 10:49:24.329372   23621 system_svc.go:56] duration metric: took 17.689597ms WaitForService to wait for kubelet
	I1007 10:49:24.329408   23621 kubeadm.go:582] duration metric: took 23.588563567s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 10:49:24.329431   23621 node_conditions.go:102] verifying NodePressure condition ...
	I1007 10:49:24.498716   23621 request.go:632] Waited for 169.197079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes
	I1007 10:49:24.498772   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes
	I1007 10:49:24.498777   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:24.498785   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:24.498788   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:24.502487   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:24.503651   23621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 10:49:24.503669   23621 node_conditions.go:123] node cpu capacity is 2
	I1007 10:49:24.503680   23621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 10:49:24.503684   23621 node_conditions.go:123] node cpu capacity is 2
	I1007 10:49:24.503688   23621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 10:49:24.503691   23621 node_conditions.go:123] node cpu capacity is 2
	I1007 10:49:24.503697   23621 node_conditions.go:105] duration metric: took 174.259877ms to run NodePressure ...
	I1007 10:49:24.503713   23621 start.go:241] waiting for startup goroutines ...
	I1007 10:49:24.503733   23621 start.go:255] writing updated cluster config ...
	I1007 10:49:24.504082   23621 ssh_runner.go:195] Run: rm -f paused
	I1007 10:49:24.554954   23621 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 10:49:24.557268   23621 out.go:177] * Done! kubectl is now configured to use "ha-406505" cluster and "default" namespace by default
	
	
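	(Editor's note, not part of the captured log: the run above repeatedly probes https://192.168.39.250:8443/healthz until the apiserver answers "ok" before declaring the cluster ready. A minimal stand-alone sketch of that polling pattern in Go is shown below; it is a hypothetical illustration, not minikube source, and the URL, timeouts, and TLS handling are assumptions.)

	// healthzpoll: hypothetical sketch of the readiness-polling pattern seen
	// in the log above -- GET an apiserver /healthz endpoint until it returns
	// HTTP 200 or a deadline expires. Not minikube code; values are assumed.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The test cluster uses a self-signed CA, so certificate
			// verification is skipped in this sketch.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported healthy
				}
			}
			time.Sleep(500 * time.Millisecond) // back off between probes
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.250:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ok")
	}
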
	==> CRI-O <==
	Oct 07 10:53:14 ha-406505 crio[660]: time="2024-10-07 10:53:14.343762237Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db6c0a2f-bd5a-49b7-aa9c-c46e46fdefc9 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:53:14 ha-406505 crio[660]: time="2024-10-07 10:53:14.345133231Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2fc35ea5-6ce4-480a-b6b9-6b056794e7df name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:14 ha-406505 crio[660]: time="2024-10-07 10:53:14.345773465Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298394345746850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2fc35ea5-6ce4-480a-b6b9-6b056794e7df name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:14 ha-406505 crio[660]: time="2024-10-07 10:53:14.346462200Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9408e47d-4193-4355-9a24-d48d18d97efa name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:14 ha-406505 crio[660]: time="2024-10-07 10:53:14.346573654Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9408e47d-4193-4355-9a24-d48d18d97efa name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:14 ha-406505 crio[660]: time="2024-10-07 10:53:14.346917780Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d9a2a1043aa257a5b49746cc308a94d265694817a7f0fbbd68ad171298991f1,PodSandboxId:77c3242ae96e046410e9f20005f1fbb24da588dfaee8e50c326db63c8937c8e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728298170563505966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tzgjx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b76f90b1-386b-4eda-966f-2400d6bf4412,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cd2f018baffde621ecccf90e2bebf1bd5127de1fc5363ec02ef8a77dfe2fb6,PodSandboxId:ce1fc89e90c8e387dff576649434b5b4695d8ebba82ba26bc3f4cefcfc33c65a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728298019505152058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be10b32c-e562-40ef-8b47-04cd1caf9778,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0cc4a36e486c6a488e846bfbf03e43b7da70bb2bf99a153487b87f34d565f12,PodSandboxId:32fee1b9f25d39415e7cd76b67ef4415ac18c57e3916abbb8b84f537d05d70bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019502169374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xzc88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22736c0-5ca4-4c9b-bcd4-cf95f9390507,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ebc4ee6afc90a7e8d867a5ba3221808427590e259e04a5ba21cc1196931b136,PodSandboxId:6142c38866566220880ffbabeb4647565b882222de06d8ef0ebb773d8133ce82,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019443860618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d8533b9-19
2b-49a8-8d17-96ffd98cb729,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4abb8ea9312274b1d39534e4b29dcfa3f2435a34e478f7748745c76ad1380dec,PodSandboxId:33e535c0eb67f0d998329496b1eb04fc05224699267732416929dd8d574448c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17282980
07472938380,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pt74h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb72605c-a772-4b04-a14d-02efe957c9d0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b7425285dcb9164d5c7fff76a667317585d30360dbfa7acfcbbb4564f111ff,PodSandboxId:f6d2bf974f6664bc2f90e25c4c24158ddf2d928418ed69b8269deb018a7c47d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728298007298600158,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlnhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053080d5-38da-4108-96aa-f4a8dbe5de91,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79eb2653667b5edb3b74e57f09824040325c3d1478d135cb84dc2fc3ad5cffdf,PodSandboxId:faf0d86acd1e3016c8943a6859ab004358a6ebe57d3124f3a3f7e2cf653dcbce,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728297999352793772,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bdcf35327874f36021578ca054760a4,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4965d1b169f33dee456a383fa75c0f93cca6bf8017be5bf1f0787d83919887,PodSandboxId:77c273367dc317c1dbfced48163648058a222251f6c9e1c99ceebb3d8512736d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728297996346881143,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10aaa3e84694103c024dc95a3ae5c57f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b63558545dbd5bc6d949b55a25a4e873994215448c6c82acc474c3b3804be46,PodSandboxId:de56de352fe21ebf99f251426625b2e2196c9791eacf4a3a0f0cc212470d6959,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728297996306701860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e0002ddfebe157cb7f0f09bdb94c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11a16a81bf6bf12ab8aa881ebbb9fc65f881114f3caf29ac204033459e11987b,PodSandboxId:b351c9fd7630d9f6bd8758086f9311cf9cf764c9d014e815df7e65a88ac09104,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728297996266468474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406505,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 572e44bb4eeb4579e4fb7c299dd7cd5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0b61d1fd92070463676e33d6b2f91e643d202d53b9af7712c8459581d39750,PodSandboxId:c4fb1e79d237901ac30099998c455123e96284ff720650ca3e08edf1d232d547,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728297996234033589,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406505,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01277ab648416b0c5ac093cf7ea4b7be,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9408e47d-4193-4355-9a24-d48d18d97efa name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:14 ha-406505 crio[660]: time="2024-10-07 10:53:14.389309129Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3a5acbdb-34e5-4d82-a5e3-bccedfe4b034 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:53:14 ha-406505 crio[660]: time="2024-10-07 10:53:14.389688489Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3a5acbdb-34e5-4d82-a5e3-bccedfe4b034 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:53:14 ha-406505 crio[660]: time="2024-10-07 10:53:14.391336461Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=09bc4101-7a71-4ff2-a719-7bf1b34f4e33 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:14 ha-406505 crio[660]: time="2024-10-07 10:53:14.391841851Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298394391816515,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09bc4101-7a71-4ff2-a719-7bf1b34f4e33 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:14 ha-406505 crio[660]: time="2024-10-07 10:53:14.392296897Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a79183b-9e19-4190-aa24-564e3730a67b name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:14 ha-406505 crio[660]: time="2024-10-07 10:53:14.392371516Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a79183b-9e19-4190-aa24-564e3730a67b name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:14 ha-406505 crio[660]: time="2024-10-07 10:53:14.392668733Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d9a2a1043aa257a5b49746cc308a94d265694817a7f0fbbd68ad171298991f1,PodSandboxId:77c3242ae96e046410e9f20005f1fbb24da588dfaee8e50c326db63c8937c8e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728298170563505966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tzgjx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b76f90b1-386b-4eda-966f-2400d6bf4412,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cd2f018baffde621ecccf90e2bebf1bd5127de1fc5363ec02ef8a77dfe2fb6,PodSandboxId:ce1fc89e90c8e387dff576649434b5b4695d8ebba82ba26bc3f4cefcfc33c65a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728298019505152058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be10b32c-e562-40ef-8b47-04cd1caf9778,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0cc4a36e486c6a488e846bfbf03e43b7da70bb2bf99a153487b87f34d565f12,PodSandboxId:32fee1b9f25d39415e7cd76b67ef4415ac18c57e3916abbb8b84f537d05d70bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019502169374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xzc88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22736c0-5ca4-4c9b-bcd4-cf95f9390507,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ebc4ee6afc90a7e8d867a5ba3221808427590e259e04a5ba21cc1196931b136,PodSandboxId:6142c38866566220880ffbabeb4647565b882222de06d8ef0ebb773d8133ce82,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019443860618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d8533b9-19
2b-49a8-8d17-96ffd98cb729,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4abb8ea9312274b1d39534e4b29dcfa3f2435a34e478f7748745c76ad1380dec,PodSandboxId:33e535c0eb67f0d998329496b1eb04fc05224699267732416929dd8d574448c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17282980
07472938380,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pt74h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb72605c-a772-4b04-a14d-02efe957c9d0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b7425285dcb9164d5c7fff76a667317585d30360dbfa7acfcbbb4564f111ff,PodSandboxId:f6d2bf974f6664bc2f90e25c4c24158ddf2d928418ed69b8269deb018a7c47d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728298007298600158,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlnhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053080d5-38da-4108-96aa-f4a8dbe5de91,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79eb2653667b5edb3b74e57f09824040325c3d1478d135cb84dc2fc3ad5cffdf,PodSandboxId:faf0d86acd1e3016c8943a6859ab004358a6ebe57d3124f3a3f7e2cf653dcbce,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728297999352793772,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bdcf35327874f36021578ca054760a4,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4965d1b169f33dee456a383fa75c0f93cca6bf8017be5bf1f0787d83919887,PodSandboxId:77c273367dc317c1dbfced48163648058a222251f6c9e1c99ceebb3d8512736d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728297996346881143,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10aaa3e84694103c024dc95a3ae5c57f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b63558545dbd5bc6d949b55a25a4e873994215448c6c82acc474c3b3804be46,PodSandboxId:de56de352fe21ebf99f251426625b2e2196c9791eacf4a3a0f0cc212470d6959,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728297996306701860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e0002ddfebe157cb7f0f09bdb94c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11a16a81bf6bf12ab8aa881ebbb9fc65f881114f3caf29ac204033459e11987b,PodSandboxId:b351c9fd7630d9f6bd8758086f9311cf9cf764c9d014e815df7e65a88ac09104,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728297996266468474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406505,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 572e44bb4eeb4579e4fb7c299dd7cd5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0b61d1fd92070463676e33d6b2f91e643d202d53b9af7712c8459581d39750,PodSandboxId:c4fb1e79d237901ac30099998c455123e96284ff720650ca3e08edf1d232d547,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728297996234033589,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406505,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01277ab648416b0c5ac093cf7ea4b7be,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a79183b-9e19-4190-aa24-564e3730a67b name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:14 ha-406505 crio[660]: time="2024-10-07 10:53:14.437920333Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=744b985c-675c-4263-ac0c-a149ca05bbed name=/runtime.v1.RuntimeService/Version
	Oct 07 10:53:14 ha-406505 crio[660]: time="2024-10-07 10:53:14.438044102Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=744b985c-675c-4263-ac0c-a149ca05bbed name=/runtime.v1.RuntimeService/Version
	Oct 07 10:53:14 ha-406505 crio[660]: time="2024-10-07 10:53:14.439639173Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=168b2e82-63cf-482a-a6fa-4dd41fe500e0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:14 ha-406505 crio[660]: time="2024-10-07 10:53:14.440150190Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298394440123373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=168b2e82-63cf-482a-a6fa-4dd41fe500e0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:14 ha-406505 crio[660]: time="2024-10-07 10:53:14.440840313Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d9090275-b43a-4475-be3a-3d85e8a7426e name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:14 ha-406505 crio[660]: time="2024-10-07 10:53:14.440937556Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d9090275-b43a-4475-be3a-3d85e8a7426e name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:14 ha-406505 crio[660]: time="2024-10-07 10:53:14.441240929Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d9a2a1043aa257a5b49746cc308a94d265694817a7f0fbbd68ad171298991f1,PodSandboxId:77c3242ae96e046410e9f20005f1fbb24da588dfaee8e50c326db63c8937c8e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728298170563505966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tzgjx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b76f90b1-386b-4eda-966f-2400d6bf4412,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cd2f018baffde621ecccf90e2bebf1bd5127de1fc5363ec02ef8a77dfe2fb6,PodSandboxId:ce1fc89e90c8e387dff576649434b5b4695d8ebba82ba26bc3f4cefcfc33c65a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728298019505152058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be10b32c-e562-40ef-8b47-04cd1caf9778,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0cc4a36e486c6a488e846bfbf03e43b7da70bb2bf99a153487b87f34d565f12,PodSandboxId:32fee1b9f25d39415e7cd76b67ef4415ac18c57e3916abbb8b84f537d05d70bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019502169374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xzc88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22736c0-5ca4-4c9b-bcd4-cf95f9390507,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ebc4ee6afc90a7e8d867a5ba3221808427590e259e04a5ba21cc1196931b136,PodSandboxId:6142c38866566220880ffbabeb4647565b882222de06d8ef0ebb773d8133ce82,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019443860618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d8533b9-19
2b-49a8-8d17-96ffd98cb729,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4abb8ea9312274b1d39534e4b29dcfa3f2435a34e478f7748745c76ad1380dec,PodSandboxId:33e535c0eb67f0d998329496b1eb04fc05224699267732416929dd8d574448c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17282980
07472938380,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pt74h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb72605c-a772-4b04-a14d-02efe957c9d0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b7425285dcb9164d5c7fff76a667317585d30360dbfa7acfcbbb4564f111ff,PodSandboxId:f6d2bf974f6664bc2f90e25c4c24158ddf2d928418ed69b8269deb018a7c47d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728298007298600158,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlnhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053080d5-38da-4108-96aa-f4a8dbe5de91,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79eb2653667b5edb3b74e57f09824040325c3d1478d135cb84dc2fc3ad5cffdf,PodSandboxId:faf0d86acd1e3016c8943a6859ab004358a6ebe57d3124f3a3f7e2cf653dcbce,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728297999352793772,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bdcf35327874f36021578ca054760a4,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4965d1b169f33dee456a383fa75c0f93cca6bf8017be5bf1f0787d83919887,PodSandboxId:77c273367dc317c1dbfced48163648058a222251f6c9e1c99ceebb3d8512736d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728297996346881143,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10aaa3e84694103c024dc95a3ae5c57f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b63558545dbd5bc6d949b55a25a4e873994215448c6c82acc474c3b3804be46,PodSandboxId:de56de352fe21ebf99f251426625b2e2196c9791eacf4a3a0f0cc212470d6959,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728297996306701860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e0002ddfebe157cb7f0f09bdb94c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11a16a81bf6bf12ab8aa881ebbb9fc65f881114f3caf29ac204033459e11987b,PodSandboxId:b351c9fd7630d9f6bd8758086f9311cf9cf764c9d014e815df7e65a88ac09104,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728297996266468474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406505,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 572e44bb4eeb4579e4fb7c299dd7cd5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0b61d1fd92070463676e33d6b2f91e643d202d53b9af7712c8459581d39750,PodSandboxId:c4fb1e79d237901ac30099998c455123e96284ff720650ca3e08edf1d232d547,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728297996234033589,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406505,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01277ab648416b0c5ac093cf7ea4b7be,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d9090275-b43a-4475-be3a-3d85e8a7426e name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:14 ha-406505 crio[660]: time="2024-10-07 10:53:14.449211658Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2a7e557-b4ed-404f-8443-433bed82391c name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 07 10:53:14 ha-406505 crio[660]: time="2024-10-07 10:53:14.449818212Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:77c3242ae96e046410e9f20005f1fbb24da588dfaee8e50c326db63c8937c8e3,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-tzgjx,Uid:b76f90b1-386b-4eda-966f-2400d6bf4412,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728298167304213439,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-tzgjx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b76f90b1-386b-4eda-966f-2400d6bf4412,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T10:49:25.487261096Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ce1fc89e90c8e387dff576649434b5b4695d8ebba82ba26bc3f4cefcfc33c65a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:be10b32c-e562-40ef-8b47-04cd1caf9778,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1728298019253313077,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be10b32c-e562-40ef-8b47-04cd1caf9778,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-07T10:46:58.927459721Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:32fee1b9f25d39415e7cd76b67ef4415ac18c57e3916abbb8b84f537d05d70bf,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-xzc88,Uid:f22736c0-5ca4-4c9b-bcd4-cf95f9390507,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728298019253174253,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-xzc88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22736c0-5ca4-4c9b-bcd4-cf95f9390507,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T10:46:58.921906033Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6142c38866566220880ffbabeb4647565b882222de06d8ef0ebb773d8133ce82,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-ghmwd,Uid:8d8533b9-192b-49a8-8d17-96ffd98cb729,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1728298019215051273,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d8533b9-192b-49a8-8d17-96ffd98cb729,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T10:46:58.907951542Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f6d2bf974f6664bc2f90e25c4c24158ddf2d928418ed69b8269deb018a7c47d3,Metadata:&PodSandboxMetadata{Name:kube-proxy-nlnhf,Uid:053080d5-38da-4108-96aa-f4a8dbe5de91,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728298007038457748,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-nlnhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053080d5-38da-4108-96aa-f4a8dbe5de91,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-10-07T10:46:46.711366491Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:33e535c0eb67f0d998329496b1eb04fc05224699267732416929dd8d574448c5,Metadata:&PodSandboxMetadata{Name:kindnet-pt74h,Uid:bb72605c-a772-4b04-a14d-02efe957c9d0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728298007036300361,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-pt74h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb72605c-a772-4b04-a14d-02efe957c9d0,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T10:46:46.719306759Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:77c273367dc317c1dbfced48163648058a222251f6c9e1c99ceebb3d8512736d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-406505,Uid:10aaa3e84694103c024dc95a3ae5c57f,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1728297996043896138,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10aaa3e84694103c024dc95a3ae5c57f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 10aaa3e84694103c024dc95a3ae5c57f,kubernetes.io/config.seen: 2024-10-07T10:46:35.558262766Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:de56de352fe21ebf99f251426625b2e2196c9791eacf4a3a0f0cc212470d6959,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-406505,Uid:58e0002ddfebe157cb7f0f09bdb94c3e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728297996037338237,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e0002ddfebe157cb7f0f09bdb94c3e,tier: control-plane,},Ann
otations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.250:8443,kubernetes.io/config.hash: 58e0002ddfebe157cb7f0f09bdb94c3e,kubernetes.io/config.seen: 2024-10-07T10:46:35.558260431Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c4fb1e79d237901ac30099998c455123e96284ff720650ca3e08edf1d232d547,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-406505,Uid:01277ab648416b0c5ac093cf7ea4b7be,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728297996033331041,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01277ab648416b0c5ac093cf7ea4b7be,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 01277ab648416b0c5ac093cf7ea4b7be,kubernetes.io/config.seen: 2024-10-07T10:46:35.558261558Z,kubernetes.io/config.source: file,},RuntimeHandler:,
},&PodSandbox{Id:faf0d86acd1e3016c8943a6859ab004358a6ebe57d3124f3a3f7e2cf653dcbce,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-406505,Uid:7bdcf35327874f36021578ca054760a4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728297996023356334,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bdcf35327874f36021578ca054760a4,},Annotations:map[string]string{kubernetes.io/config.hash: 7bdcf35327874f36021578ca054760a4,kubernetes.io/config.seen: 2024-10-07T10:46:35.558263881Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b351c9fd7630d9f6bd8758086f9311cf9cf764c9d014e815df7e65a88ac09104,Metadata:&PodSandboxMetadata{Name:etcd-ha-406505,Uid:572e44bb4eeb4579e4fb7c299dd7cd5c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728297996009026893,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-406505,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 572e44bb4eeb4579e4fb7c299dd7cd5c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.250:2379,kubernetes.io/config.hash: 572e44bb4eeb4579e4fb7c299dd7cd5c,kubernetes.io/config.seen: 2024-10-07T10:46:35.558256702Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=d2a7e557-b4ed-404f-8443-433bed82391c name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 07 10:53:14 ha-406505 crio[660]: time="2024-10-07 10:53:14.450946732Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=61a45dce-d842-4b6a-934f-87793ec5eb16 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:14 ha-406505 crio[660]: time="2024-10-07 10:53:14.451025080Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=61a45dce-d842-4b6a-934f-87793ec5eb16 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:14 ha-406505 crio[660]: time="2024-10-07 10:53:14.451351736Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d9a2a1043aa257a5b49746cc308a94d265694817a7f0fbbd68ad171298991f1,PodSandboxId:77c3242ae96e046410e9f20005f1fbb24da588dfaee8e50c326db63c8937c8e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728298170563505966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tzgjx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b76f90b1-386b-4eda-966f-2400d6bf4412,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cd2f018baffde621ecccf90e2bebf1bd5127de1fc5363ec02ef8a77dfe2fb6,PodSandboxId:ce1fc89e90c8e387dff576649434b5b4695d8ebba82ba26bc3f4cefcfc33c65a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728298019505152058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be10b32c-e562-40ef-8b47-04cd1caf9778,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0cc4a36e486c6a488e846bfbf03e43b7da70bb2bf99a153487b87f34d565f12,PodSandboxId:32fee1b9f25d39415e7cd76b67ef4415ac18c57e3916abbb8b84f537d05d70bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019502169374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xzc88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22736c0-5ca4-4c9b-bcd4-cf95f9390507,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ebc4ee6afc90a7e8d867a5ba3221808427590e259e04a5ba21cc1196931b136,PodSandboxId:6142c38866566220880ffbabeb4647565b882222de06d8ef0ebb773d8133ce82,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019443860618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d8533b9-19
2b-49a8-8d17-96ffd98cb729,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4abb8ea9312274b1d39534e4b29dcfa3f2435a34e478f7748745c76ad1380dec,PodSandboxId:33e535c0eb67f0d998329496b1eb04fc05224699267732416929dd8d574448c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17282980
07472938380,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pt74h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb72605c-a772-4b04-a14d-02efe957c9d0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b7425285dcb9164d5c7fff76a667317585d30360dbfa7acfcbbb4564f111ff,PodSandboxId:f6d2bf974f6664bc2f90e25c4c24158ddf2d928418ed69b8269deb018a7c47d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728298007298600158,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlnhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053080d5-38da-4108-96aa-f4a8dbe5de91,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79eb2653667b5edb3b74e57f09824040325c3d1478d135cb84dc2fc3ad5cffdf,PodSandboxId:faf0d86acd1e3016c8943a6859ab004358a6ebe57d3124f3a3f7e2cf653dcbce,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728297999352793772,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bdcf35327874f36021578ca054760a4,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4965d1b169f33dee456a383fa75c0f93cca6bf8017be5bf1f0787d83919887,PodSandboxId:77c273367dc317c1dbfced48163648058a222251f6c9e1c99ceebb3d8512736d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728297996346881143,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10aaa3e84694103c024dc95a3ae5c57f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b63558545dbd5bc6d949b55a25a4e873994215448c6c82acc474c3b3804be46,PodSandboxId:de56de352fe21ebf99f251426625b2e2196c9791eacf4a3a0f0cc212470d6959,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728297996306701860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e0002ddfebe157cb7f0f09bdb94c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11a16a81bf6bf12ab8aa881ebbb9fc65f881114f3caf29ac204033459e11987b,PodSandboxId:b351c9fd7630d9f6bd8758086f9311cf9cf764c9d014e815df7e65a88ac09104,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728297996266468474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406505,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 572e44bb4eeb4579e4fb7c299dd7cd5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0b61d1fd92070463676e33d6b2f91e643d202d53b9af7712c8459581d39750,PodSandboxId:c4fb1e79d237901ac30099998c455123e96284ff720650ca3e08edf1d232d547,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728297996234033589,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406505,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01277ab648416b0c5ac093cf7ea4b7be,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=61a45dce-d842-4b6a-934f-87793ec5eb16 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4d9a2a1043aa2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   77c3242ae96e0       busybox-7dff88458-tzgjx
	77cd2f018baff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   ce1fc89e90c8e       storage-provisioner
	b0cc4a36e486c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   32fee1b9f25d3       coredns-7c65d6cfc9-xzc88
	0ebc4ee6afc90       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   6142c38866566       coredns-7c65d6cfc9-ghmwd
	4abb8ea931227       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   33e535c0eb67f       kindnet-pt74h
	99b7425285dcb       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   f6d2bf974f666       kube-proxy-nlnhf
	79eb2653667b5       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   faf0d86acd1e3       kube-vip-ha-406505
	fa4965d1b169f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   77c273367dc31       kube-scheduler-ha-406505
	5b63558545dbd       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   de56de352fe21       kube-apiserver-ha-406505
	11a16a81bf6bf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   b351c9fd7630d       etcd-ha-406505
	eb0b61d1fd920       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   c4fb1e79d2379       kube-controller-manager-ha-406505
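	The RuntimeService/ListContainers request/response pairs logged above and this container status table are two views of the same CRI data: a client (the kubelet, crictl, or the log collector) queries CRI-O over its gRPC socket, and crio records each call at debug level. A minimal Go sketch of such a query follows; it assumes the CRI-O socket path advertised in the node annotations (unix:///var/run/crio/crio.sock) plus the k8s.io/cri-api and google.golang.org/grpc modules, and is an illustration of the API in use here rather than the exact code this job runs.

	    package main

	    import (
	        "context"
	        "fmt"
	        "log"
	        "time"

	        "google.golang.org/grpc"
	        "google.golang.org/grpc/credentials/insecure"
	        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )

	    func main() {
	        // Dial the CRI-O socket; the unix:// target lets gRPC connect directly.
	        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	            grpc.WithTransportCredentials(insecure.NewCredentials()))
	        if err != nil {
	            log.Fatalf("dial CRI-O: %v", err)
	        }
	        defer conn.Close()

	        client := runtimeapi.NewRuntimeServiceClient(conn)
	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	        defer cancel()

	        // Same filter as the last logged request: only CONTAINER_RUNNING containers.
	        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
	            Filter: &runtimeapi.ContainerFilter{
	                State: &runtimeapi.ContainerStateValue{
	                    State: runtimeapi.ContainerState_CONTAINER_RUNNING,
	                },
	            },
	        })
	        if err != nil {
	            log.Fatalf("ListContainers: %v", err)
	        }
	        // Print a truncated ID, name, and state, similar to the table above.
	        for _, c := range resp.Containers {
	            fmt.Printf("%-13.13s  %-25s  %v\n", c.Id, c.Metadata.Name, c.State)
	        }
	    }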
	
	
	==> coredns [0ebc4ee6afc90a7e8d867a5ba3221808427590e259e04a5ba21cc1196931b136] <==
	[INFO] 10.244.1.2:52141 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000229841s
	[INFO] 10.244.1.2:49387 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177541s
	[INFO] 10.244.1.2:51777 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003610459s
	[INFO] 10.244.1.2:53883 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000188749s
	[INFO] 10.244.2.2:56490 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126634s
	[INFO] 10.244.2.2:39507 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008519s
	[INFO] 10.244.2.2:51465 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085975s
	[INFO] 10.244.2.2:54662 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000141674s
	[INFO] 10.244.0.4:60148 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114521s
	[INFO] 10.244.0.4:60136 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000061595s
	[INFO] 10.244.0.4:58172 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000046455s
	[INFO] 10.244.0.4:37188 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001182047s
	[INFO] 10.244.0.4:43590 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000115472s
	[INFO] 10.244.0.4:58012 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000033373s
	[INFO] 10.244.1.2:49885 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158136s
	[INFO] 10.244.1.2:37058 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108137s
	[INFO] 10.244.1.2:53254 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014209s
	[INFO] 10.244.2.2:48605 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000226971s
	[INFO] 10.244.0.4:56354 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139347s
	[INFO] 10.244.0.4:53408 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091527s
	[INFO] 10.244.1.2:56944 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148755s
	[INFO] 10.244.1.2:35017 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000240968s
	[INFO] 10.244.1.2:60956 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000156011s
	[INFO] 10.244.2.2:52452 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000151278s
	[INFO] 10.244.0.4:37523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000081767s
	
	
	==> coredns [b0cc4a36e486c6a488e846bfbf03e43b7da70bb2bf99a153487b87f34d565f12] <==
	[INFO] 10.244.2.2:48222 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000340345s
	[INFO] 10.244.2.2:43370 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001307969s
	[INFO] 10.244.0.4:43661 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000100802s
	[INFO] 10.244.0.4:58476 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001778301s
	[INFO] 10.244.1.2:33672 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000201181s
	[INFO] 10.244.1.2:45107 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000305371s
	[INFO] 10.244.2.2:49200 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000294988s
	[INFO] 10.244.2.2:49393 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001850366s
	[INFO] 10.244.2.2:48213 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001471137s
	[INFO] 10.244.2.2:60468 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152254s
	[INFO] 10.244.0.4:59551 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001687745s
	[INFO] 10.244.0.4:49859 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000044844s
	[INFO] 10.244.1.2:53294 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000358207s
	[INFO] 10.244.2.2:48456 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119873s
	[INFO] 10.244.2.2:52623 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000223935s
	[INFO] 10.244.2.2:35737 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000161301s
	[INFO] 10.244.0.4:48948 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099818s
	[INFO] 10.244.0.4:38842 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000194312s
	[INFO] 10.244.1.2:52889 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000213247s
	[INFO] 10.244.2.2:54256 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000280783s
	[INFO] 10.244.2.2:50232 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000318899s
	[INFO] 10.244.2.2:39214 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000147924s
	[INFO] 10.244.0.4:53521 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112358s
	[INFO] 10.244.0.4:49217 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000161935s
	[INFO] 10.244.0.4:32867 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109582s
	
	
	==> describe nodes <==
	Name:               ha-406505
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406505
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f
	                    minikube.k8s.io/name=ha-406505
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T10_46_43_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 10:46:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406505
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 10:53:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 10:49:45 +0000   Mon, 07 Oct 2024 10:46:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 10:49:45 +0000   Mon, 07 Oct 2024 10:46:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 10:49:45 +0000   Mon, 07 Oct 2024 10:46:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 10:49:45 +0000   Mon, 07 Oct 2024 10:46:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    ha-406505
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f87dab03082f46978f270a1d9209ed7f
	  System UUID:                f87dab03-082f-4697-8f27-0a1d9209ed7f
	  Boot ID:                    c90db251-8dbe-47f3-98dd-72c0b5cbd489
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-tzgjx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 coredns-7c65d6cfc9-ghmwd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m27s
	  kube-system                 coredns-7c65d6cfc9-xzc88             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m27s
	  kube-system                 etcd-ha-406505                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m32s
	  kube-system                 kindnet-pt74h                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m28s
	  kube-system                 kube-apiserver-ha-406505             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-controller-manager-ha-406505    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-proxy-nlnhf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-scheduler-ha-406505             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-vip-ha-406505                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m26s  kube-proxy       
	  Normal  Starting                 6m32s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m32s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m32s  kubelet          Node ha-406505 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m32s  kubelet          Node ha-406505 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m32s  kubelet          Node ha-406505 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m28s  node-controller  Node ha-406505 event: Registered Node ha-406505 in Controller
	  Normal  NodeReady                6m16s  kubelet          Node ha-406505 status is now: NodeReady
	  Normal  RegisteredNode           5m28s  node-controller  Node ha-406505 event: Registered Node ha-406505 in Controller
	  Normal  RegisteredNode           4m9s   node-controller  Node ha-406505 event: Registered Node ha-406505 in Controller
	
	
	Name:               ha-406505-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406505-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f
	                    minikube.k8s.io/name=ha-406505
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T10_47_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 10:47:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406505-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 10:50:41 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 07 Oct 2024 10:49:40 +0000   Mon, 07 Oct 2024 10:51:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 07 Oct 2024 10:49:40 +0000   Mon, 07 Oct 2024 10:51:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 07 Oct 2024 10:49:40 +0000   Mon, 07 Oct 2024 10:51:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 07 Oct 2024 10:49:40 +0000   Mon, 07 Oct 2024 10:51:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.37
	  Hostname:    ha-406505-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad0b7870a2a54204abf112edd9c072ce
	  System UUID:                ad0b7870-a2a5-4204-abf1-12edd9c072ce
	  Boot ID:                    0b4627e5-d7a2-40a3-9d63-8cae53190740
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-bjz2q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 etcd-ha-406505-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m34s
	  kube-system                 kindnet-h8fh4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m36s
	  kube-system                 kube-apiserver-ha-406505-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-controller-manager-ha-406505-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-proxy-6ng4z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-scheduler-ha-406505-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-vip-ha-406505-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m32s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m36s (x8 over 5m36s)  kubelet          Node ha-406505-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m36s (x8 over 5m36s)  kubelet          Node ha-406505-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m36s (x7 over 5m36s)  kubelet          Node ha-406505-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m33s                  node-controller  Node ha-406505-m02 event: Registered Node ha-406505-m02 in Controller
	  Normal  RegisteredNode           5m28s                  node-controller  Node ha-406505-m02 event: Registered Node ha-406505-m02 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-406505-m02 event: Registered Node ha-406505-m02 in Controller
	  Normal  NodeNotReady             109s                   node-controller  Node ha-406505-m02 status is now: NodeNotReady
	
	
	Name:               ha-406505-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406505-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f
	                    minikube.k8s.io/name=ha-406505
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T10_49_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 10:48:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406505-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 10:53:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 10:49:57 +0000   Mon, 07 Oct 2024 10:48:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 10:49:57 +0000   Mon, 07 Oct 2024 10:48:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 10:49:57 +0000   Mon, 07 Oct 2024 10:48:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 10:49:57 +0000   Mon, 07 Oct 2024 10:49:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-406505-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 75575a7b8eb34e0589ff800419073c6f
	  System UUID:                75575a7b-8eb3-4e05-89ff-800419073c6f
	  Boot ID:                    797c7f20-765b-4e29-a483-d65c033a2625
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ktkg9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 etcd-ha-406505-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m16s
	  kube-system                 kindnet-28vpp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m18s
	  kube-system                 kube-apiserver-ha-406505-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-controller-manager-ha-406505-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-proxy-c79zf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-scheduler-ha-406505-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-vip-ha-406505-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m13s                  kube-proxy       
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-406505-m03 event: Registered Node ha-406505-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m18s (x8 over 4m18s)  kubelet          Node ha-406505-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m18s (x8 over 4m18s)  kubelet          Node ha-406505-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s (x7 over 4m18s)  kubelet          Node ha-406505-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-406505-m03 event: Registered Node ha-406505-m03 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-406505-m03 event: Registered Node ha-406505-m03 in Controller
	
	
	Name:               ha-406505-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406505-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f
	                    minikube.k8s.io/name=ha-406505
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T10_50_06_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 10:50:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406505-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 10:53:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 10:50:36 +0000   Mon, 07 Oct 2024 10:50:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 10:50:36 +0000   Mon, 07 Oct 2024 10:50:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 10:50:36 +0000   Mon, 07 Oct 2024 10:50:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 10:50:36 +0000   Mon, 07 Oct 2024 10:50:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    ha-406505-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9eb4bdac85cb424a99b5076fbfc659b6
	  System UUID:                9eb4bdac-85cb-424a-99b5-076fbfc659b6
	  Boot ID:                    6e48a403-8d50-4a51-beab-d3d8e1e29c60
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-cqsll       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m9s
	  kube-system                 kube-proxy-8n5g6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m3s                 kube-proxy       
	  Normal  RegisteredNode           3m9s                 node-controller  Node ha-406505-m04 event: Registered Node ha-406505-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m9s (x2 over 3m9s)  kubelet          Node ha-406505-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m9s (x2 over 3m9s)  kubelet          Node ha-406505-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m9s (x2 over 3m9s)  kubelet          Node ha-406505-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m8s                 node-controller  Node ha-406505-m04 event: Registered Node ha-406505-m04 in Controller
	  Normal  RegisteredNode           3m8s                 node-controller  Node ha-406505-m04 event: Registered Node ha-406505-m04 in Controller
	  Normal  NodeReady                2m48s                kubelet          Node ha-406505-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 7 10:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051371] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040405] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.858113] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.711350] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.602582] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.722628] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.057663] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056433] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.169114] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.137291] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.300660] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +4.116084] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +3.680655] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.069150] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.087227] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.089104] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.196698] kauditd_printk_skb: 31 callbacks suppressed
	[ +11.900338] kauditd_printk_skb: 28 callbacks suppressed
	[Oct 7 10:47] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [11a16a81bf6bf12ab8aa881ebbb9fc65f881114f3caf29ac204033459e11987b] <==
	{"level":"warn","ts":"2024-10-07T10:53:14.352658Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:14.451952Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:14.551732Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:14.740782Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:14.746114Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:14.749382Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:14.751580Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:14.758656Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:14.762990Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:14.767649Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:14.777937Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:14.785521Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:14.793243Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:14.801607Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:14.805753Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:14.812193Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:14.819629Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:14.826555Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:14.830952Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:14.834764Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:14.837523Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:14.840458Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:14.848307Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:14.852680Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:14.856028Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:53:14 up 7 min,  0 users,  load average: 0.92, 0.62, 0.28
	Linux ha-406505 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4abb8ea9312274b1d39534e4b29dcfa3f2435a34e478f7748745c76ad1380dec] <==
	I1007 10:52:38.834704       1 main.go:322] Node ha-406505-m04 has CIDR [10.244.3.0/24] 
	I1007 10:52:48.824984       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I1007 10:52:48.825121       1 main.go:322] Node ha-406505-m04 has CIDR [10.244.3.0/24] 
	I1007 10:52:48.825376       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I1007 10:52:48.825541       1 main.go:299] handling current node
	I1007 10:52:48.825621       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I1007 10:52:48.825668       1 main.go:322] Node ha-406505-m02 has CIDR [10.244.1.0/24] 
	I1007 10:52:48.825793       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I1007 10:52:48.825838       1 main.go:322] Node ha-406505-m03 has CIDR [10.244.2.0/24] 
	I1007 10:52:58.833626       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I1007 10:52:58.833675       1 main.go:299] handling current node
	I1007 10:52:58.833690       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I1007 10:52:58.833695       1 main.go:322] Node ha-406505-m02 has CIDR [10.244.1.0/24] 
	I1007 10:52:58.833864       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I1007 10:52:58.833902       1 main.go:322] Node ha-406505-m03 has CIDR [10.244.2.0/24] 
	I1007 10:52:58.833984       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I1007 10:52:58.834007       1 main.go:322] Node ha-406505-m04 has CIDR [10.244.3.0/24] 
	I1007 10:53:08.831971       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I1007 10:53:08.832046       1 main.go:322] Node ha-406505-m02 has CIDR [10.244.1.0/24] 
	I1007 10:53:08.832167       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I1007 10:53:08.832188       1 main.go:322] Node ha-406505-m03 has CIDR [10.244.2.0/24] 
	I1007 10:53:08.832260       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I1007 10:53:08.832280       1 main.go:322] Node ha-406505-m04 has CIDR [10.244.3.0/24] 
	I1007 10:53:08.832356       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I1007 10:53:08.832375       1 main.go:299] handling current node
	
	
	==> kube-apiserver [5b63558545dbd5bc6d949b55a25a4e873994215448c6c82acc474c3b3804be46] <==
	W1007 10:46:41.183638       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.250]
	I1007 10:46:41.185270       1 controller.go:615] quota admission added evaluator for: endpoints
	I1007 10:46:41.191014       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1007 10:46:41.276253       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1007 10:46:42.491094       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1007 10:46:42.518362       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1007 10:46:42.533655       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1007 10:46:46.678876       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1007 10:46:46.902258       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1007 10:49:31.707971       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59314: use of closed network connection
	E1007 10:49:31.903823       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59340: use of closed network connection
	E1007 10:49:32.086294       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59358: use of closed network connection
	E1007 10:49:32.297595       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59380: use of closed network connection
	E1007 10:49:32.498258       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59404: use of closed network connection
	E1007 10:49:32.676693       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59420: use of closed network connection
	E1007 10:49:32.859242       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59440: use of closed network connection
	E1007 10:49:33.057965       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59468: use of closed network connection
	E1007 10:49:33.240103       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59478: use of closed network connection
	E1007 10:49:33.559788       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59494: use of closed network connection
	E1007 10:49:33.755853       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59504: use of closed network connection
	E1007 10:49:33.944169       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59516: use of closed network connection
	E1007 10:49:34.136074       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59544: use of closed network connection
	E1007 10:49:34.332211       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59568: use of closed network connection
	E1007 10:49:34.527795       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59588: use of closed network connection
	W1007 10:51:01.196929       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.250]
	
	
	==> kube-controller-manager [eb0b61d1fd92070463676e33d6b2f91e643d202d53b9af7712c8459581d39750] <==
	I1007 10:50:05.605601       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-406505-m04\" does not exist"
	I1007 10:50:05.651707       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-406505-m04" podCIDRs=["10.244.3.0/24"]
	I1007 10:50:05.651878       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:05.652095       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:05.866588       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:06.004135       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:06.156174       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:06.156822       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-406505-m04"
	I1007 10:50:06.254557       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:06.312035       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:06.987679       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:07.073914       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:15.971952       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:26.980381       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-406505-m04"
	I1007 10:50:26.982232       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:27.002591       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:27.205853       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:36.177995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:51:25.956486       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m02"
	I1007 10:51:25.956910       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-406505-m04"
	I1007 10:51:25.977091       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m02"
	I1007 10:51:26.074899       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.887988ms"
	I1007 10:51:26.075025       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.368µs"
	I1007 10:51:26.200250       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m02"
	I1007 10:51:31.167674       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m02"
	
	
	==> kube-proxy [99b7425285dcb9164d5c7fff76a667317585d30360dbfa7acfcbbb4564f111ff] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 10:46:47.887571       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 10:46:47.911134       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.250"]
	E1007 10:46:47.911278       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 10:46:47.980015       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 10:46:47.980045       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 10:46:47.980074       1 server_linux.go:169] "Using iptables Proxier"
	I1007 10:46:47.983497       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 10:46:47.984580       1 server.go:483] "Version info" version="v1.31.1"
	I1007 10:46:47.984594       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 10:46:47.987677       1 config.go:199] "Starting service config controller"
	I1007 10:46:47.988455       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 10:46:47.988871       1 config.go:105] "Starting endpoint slice config controller"
	I1007 10:46:47.988960       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 10:46:47.990124       1 config.go:328] "Starting node config controller"
	I1007 10:46:47.990263       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 10:46:48.088926       1 shared_informer.go:320] Caches are synced for service config
	I1007 10:46:48.090118       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 10:46:48.090928       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [fa4965d1b169f33dee456a383fa75c0f93cca6bf8017be5bf1f0787d83919887] <==
	W1007 10:46:40.575139       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 10:46:40.575275       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 10:46:40.704893       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 10:46:40.704946       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 10:46:40.706026       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 10:46:40.706071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 10:46:40.735457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 10:46:40.735594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 10:46:40.745564       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1007 10:46:40.745701       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1007 10:46:40.956352       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 10:46:40.956445       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1007 10:46:43.102324       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1007 10:50:05.717930       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-cqsll\": pod kindnet-cqsll is already assigned to node \"ha-406505-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-cqsll" node="ha-406505-m04"
	E1007 10:50:05.719300       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 62093c84-d91b-44ed-a605-198bd057ee89(kube-system/kindnet-cqsll) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-cqsll"
	E1007 10:50:05.719513       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-cqsll\": pod kindnet-cqsll is already assigned to node \"ha-406505-m04\"" pod="kube-system/kindnet-cqsll"
	I1007 10:50:05.719601       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-cqsll" node="ha-406505-m04"
	E1007 10:50:05.720316       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-8n5g6\": pod kube-proxy-8n5g6 is already assigned to node \"ha-406505-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-8n5g6" node="ha-406505-m04"
	E1007 10:50:05.724984       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod df46b5c0-261e-4455-bda8-d73ef0b24faa(kube-system/kube-proxy-8n5g6) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-8n5g6"
	E1007 10:50:05.725159       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8n5g6\": pod kube-proxy-8n5g6 is already assigned to node \"ha-406505-m04\"" pod="kube-system/kube-proxy-8n5g6"
	I1007 10:50:05.725258       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-8n5g6" node="ha-406505-m04"
	E1007 10:50:05.734867       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-957n4\": pod kindnet-957n4 is already assigned to node \"ha-406505-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-957n4" node="ha-406505-m04"
	E1007 10:50:05.736396       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9b6e172b-6f7a-48e1-8a89-60f70e5b77f6(kube-system/kindnet-957n4) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-957n4"
	E1007 10:50:05.736761       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-957n4\": pod kindnet-957n4 is already assigned to node \"ha-406505-m04\"" pod="kube-system/kindnet-957n4"
	I1007 10:50:05.736855       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-957n4" node="ha-406505-m04"
	
	
	==> kubelet <==
	Oct 07 10:51:42 ha-406505 kubelet[1306]: E1007 10:51:42.610847    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298302610335333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:51:42 ha-406505 kubelet[1306]: E1007 10:51:42.610884    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298302610335333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:51:52 ha-406505 kubelet[1306]: E1007 10:51:52.612666    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298312612090878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:51:52 ha-406505 kubelet[1306]: E1007 10:51:52.612749    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298312612090878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:02 ha-406505 kubelet[1306]: E1007 10:52:02.614917    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298322614471502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:02 ha-406505 kubelet[1306]: E1007 10:52:02.615287    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298322614471502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:12 ha-406505 kubelet[1306]: E1007 10:52:12.617387    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298332617012708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:12 ha-406505 kubelet[1306]: E1007 10:52:12.617780    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298332617012708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:22 ha-406505 kubelet[1306]: E1007 10:52:22.620172    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298342619770777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:22 ha-406505 kubelet[1306]: E1007 10:52:22.620593    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298342619770777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:32 ha-406505 kubelet[1306]: E1007 10:52:32.622744    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298352622225858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:32 ha-406505 kubelet[1306]: E1007 10:52:32.622792    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298352622225858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:42 ha-406505 kubelet[1306]: E1007 10:52:42.472254    1306 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 10:52:42 ha-406505 kubelet[1306]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 10:52:42 ha-406505 kubelet[1306]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 10:52:42 ha-406505 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 10:52:42 ha-406505 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 10:52:42 ha-406505 kubelet[1306]: E1007 10:52:42.624989    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298362624467928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:42 ha-406505 kubelet[1306]: E1007 10:52:42.625274    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298362624467928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:52 ha-406505 kubelet[1306]: E1007 10:52:52.627616    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298372626959180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:52 ha-406505 kubelet[1306]: E1007 10:52:52.627689    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298372626959180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:53:02 ha-406505 kubelet[1306]: E1007 10:53:02.630238    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298382629746151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:53:02 ha-406505 kubelet[1306]: E1007 10:53:02.630676    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298382629746151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:53:12 ha-406505 kubelet[1306]: E1007 10:53:12.633509    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298392632773901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:53:12 ha-406505 kubelet[1306]: E1007 10:53:12.633800    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298392632773901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-406505 -n ha-406505
helpers_test.go:261: (dbg) Run:  kubectl --context ha-406505 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.80s)
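
Two error patterns dominate the dump above: the kube-scheduler "already assigned to node" / "scheduler cache ForgetPod failed" messages appear to be a benign race in which a bind attempt finds the kube-proxy/kindnet pods already assigned to ha-406505-m04 (the scheduler then drops them from its queue, per the "Abort adding it back to queue" lines), and the kubelet eviction-manager messages show CRI-O returning an ImageFsInfoResponse whose ContainerFilesystems list is empty, hence "missing image stats". A minimal sketch for inspecting both by hand, assuming the ha-406505 VM and kubeconfig context are still around; none of these commands are run by the test itself, and whether the modprobe helps depends on the guest kernel shipping the ip6table_nat module at all:

  kubectl --context ha-406505 -n kube-system get pod kube-proxy-8n5g6 -o jsonpath='{.spec.nodeName}'   # confirm the pod really landed on ha-406505-m04 despite the bind error
  out/minikube-linux-amd64 -p ha-406505 ssh -- sudo crictl imagefsinfo                                  # show the ImageFsInfoResponse the eviction manager is complaining about
  out/minikube-linux-amd64 -p ha-406505 ssh -- sudo modprobe ip6table_nat                               # try to load the table behind the ip6tables canary failure ("Table does not exist")
  out/minikube-linux-amd64 -p ha-406505 ssh -- sudo ip6tables -t nat -L                                 # check whether the nat table is available afterwards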

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr: (4.042852235s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
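
The checks at ha_test.go:437-446 parse the status output for three control-plane nodes, four running hosts, four kubelets and three apiservers. A quick cross-check of the cluster shape outside the harness, assuming the ha-406505 profile and context still exist (illustrative only, not something the test runs):

  kubectl --context ha-406505 get nodes -o wide                                            # all four nodes with their roles and readiness
  kubectl --context ha-406505 get nodes -l node-role.kubernetes.io/control-plane -o name   # expected to list the three control-plane nodes in this test (ha-406505, -m02, -m03)
  out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr                      # the same status call whose output the assertions parse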
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-406505 -n ha-406505
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-406505 logs -n 25: (1.496116425s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m03:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505:/home/docker/cp-test_ha-406505-m03_ha-406505.txt                       |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505 sudo cat                                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m03_ha-406505.txt                                 |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m03:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m02:/home/docker/cp-test_ha-406505-m03_ha-406505-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m02 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m03_ha-406505-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m03:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04:/home/docker/cp-test_ha-406505-m03_ha-406505-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m04 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m03_ha-406505-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-406505 cp testdata/cp-test.txt                                                | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2665267876/001/cp-test_ha-406505-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505:/home/docker/cp-test_ha-406505-m04_ha-406505.txt                       |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505 sudo cat                                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m04_ha-406505.txt                                 |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m02:/home/docker/cp-test_ha-406505-m04_ha-406505-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m02 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m04_ha-406505-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03:/home/docker/cp-test_ha-406505-m04_ha-406505-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m03 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m04_ha-406505-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-406505 node stop m02 -v=7                                                     | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-406505 node start m02 -v=7                                                    | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 10:46:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 10:46:00.685163   23621 out.go:345] Setting OutFile to fd 1 ...
	I1007 10:46:00.685349   23621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:46:00.685361   23621 out.go:358] Setting ErrFile to fd 2...
	I1007 10:46:00.685369   23621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:46:00.685896   23621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 10:46:00.686526   23621 out.go:352] Setting JSON to false
	I1007 10:46:00.687357   23621 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1655,"bootTime":1728296306,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 10:46:00.687449   23621 start.go:139] virtualization: kvm guest
	I1007 10:46:00.689739   23621 out.go:177] * [ha-406505] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 10:46:00.691129   23621 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 10:46:00.691156   23621 notify.go:220] Checking for updates...
	I1007 10:46:00.693697   23621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 10:46:00.695072   23621 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:46:00.696501   23621 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:46:00.697726   23621 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 10:46:00.698926   23621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 10:46:00.700212   23621 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 10:46:00.737316   23621 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 10:46:00.738839   23621 start.go:297] selected driver: kvm2
	I1007 10:46:00.738857   23621 start.go:901] validating driver "kvm2" against <nil>
	I1007 10:46:00.738870   23621 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 10:46:00.739587   23621 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:46:00.739673   23621 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19761-3912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 10:46:00.755165   23621 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 10:46:00.755211   23621 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 10:46:00.755442   23621 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 10:46:00.755469   23621 cni.go:84] Creating CNI manager for ""
	I1007 10:46:00.755509   23621 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1007 10:46:00.755520   23621 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 10:46:00.755574   23621 start.go:340] cluster config:
	{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:46:00.755686   23621 iso.go:125] acquiring lock: {Name:mk894f62b1e70fe8556c552f818beb1e4be6fad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:46:00.757513   23621 out.go:177] * Starting "ha-406505" primary control-plane node in "ha-406505" cluster
	I1007 10:46:00.758765   23621 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:46:00.758805   23621 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 10:46:00.758823   23621 cache.go:56] Caching tarball of preloaded images
	I1007 10:46:00.758896   23621 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 10:46:00.758906   23621 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 10:46:00.759224   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:46:00.759245   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json: {Name:mk9b03e101af877bc71d822d951dd0373d9dda34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:00.759379   23621 start.go:360] acquireMachinesLock for ha-406505: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 10:46:00.759405   23621 start.go:364] duration metric: took 14.549µs to acquireMachinesLock for "ha-406505"
	I1007 10:46:00.759421   23621 start.go:93] Provisioning new machine with config: &{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:46:00.759479   23621 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 10:46:00.761273   23621 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 10:46:00.761420   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:00.761466   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:00.775977   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35573
	I1007 10:46:00.776393   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:00.776945   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:00.776968   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:00.777275   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:00.777446   23621 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:46:00.777589   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:00.777737   23621 start.go:159] libmachine.API.Create for "ha-406505" (driver="kvm2")
	I1007 10:46:00.777767   23621 client.go:168] LocalClient.Create starting
	I1007 10:46:00.777806   23621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem
	I1007 10:46:00.777846   23621 main.go:141] libmachine: Decoding PEM data...
	I1007 10:46:00.777867   23621 main.go:141] libmachine: Parsing certificate...
	I1007 10:46:00.777925   23621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem
	I1007 10:46:00.777949   23621 main.go:141] libmachine: Decoding PEM data...
	I1007 10:46:00.777966   23621 main.go:141] libmachine: Parsing certificate...
	I1007 10:46:00.777989   23621 main.go:141] libmachine: Running pre-create checks...
	I1007 10:46:00.778000   23621 main.go:141] libmachine: (ha-406505) Calling .PreCreateCheck
	I1007 10:46:00.778317   23621 main.go:141] libmachine: (ha-406505) Calling .GetConfigRaw
	I1007 10:46:00.778644   23621 main.go:141] libmachine: Creating machine...
	I1007 10:46:00.778656   23621 main.go:141] libmachine: (ha-406505) Calling .Create
	I1007 10:46:00.778771   23621 main.go:141] libmachine: (ha-406505) Creating KVM machine...
	I1007 10:46:00.779972   23621 main.go:141] libmachine: (ha-406505) DBG | found existing default KVM network
	I1007 10:46:00.780650   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:00.780522   23644 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a50}
	I1007 10:46:00.780693   23621 main.go:141] libmachine: (ha-406505) DBG | created network xml: 
	I1007 10:46:00.780713   23621 main.go:141] libmachine: (ha-406505) DBG | <network>
	I1007 10:46:00.780722   23621 main.go:141] libmachine: (ha-406505) DBG |   <name>mk-ha-406505</name>
	I1007 10:46:00.780732   23621 main.go:141] libmachine: (ha-406505) DBG |   <dns enable='no'/>
	I1007 10:46:00.780741   23621 main.go:141] libmachine: (ha-406505) DBG |   
	I1007 10:46:00.780752   23621 main.go:141] libmachine: (ha-406505) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1007 10:46:00.780763   23621 main.go:141] libmachine: (ha-406505) DBG |     <dhcp>
	I1007 10:46:00.780774   23621 main.go:141] libmachine: (ha-406505) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1007 10:46:00.780793   23621 main.go:141] libmachine: (ha-406505) DBG |     </dhcp>
	I1007 10:46:00.780806   23621 main.go:141] libmachine: (ha-406505) DBG |   </ip>
	I1007 10:46:00.780813   23621 main.go:141] libmachine: (ha-406505) DBG |   
	I1007 10:46:00.780820   23621 main.go:141] libmachine: (ha-406505) DBG | </network>
	I1007 10:46:00.780827   23621 main.go:141] libmachine: (ha-406505) DBG | 
	I1007 10:46:00.785975   23621 main.go:141] libmachine: (ha-406505) DBG | trying to create private KVM network mk-ha-406505 192.168.39.0/24...
	I1007 10:46:00.849882   23621 main.go:141] libmachine: (ha-406505) DBG | private KVM network mk-ha-406505 192.168.39.0/24 created
	I1007 10:46:00.849911   23621 main.go:141] libmachine: (ha-406505) Setting up store path in /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505 ...
	I1007 10:46:00.849973   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:00.849860   23644 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:46:00.850002   23621 main.go:141] libmachine: (ha-406505) Building disk image from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 10:46:00.850027   23621 main.go:141] libmachine: (ha-406505) Downloading /home/jenkins/minikube-integration/19761-3912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 10:46:01.096727   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:01.096588   23644 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa...
	I1007 10:46:01.205683   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:01.205510   23644 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/ha-406505.rawdisk...
	I1007 10:46:01.205717   23621 main.go:141] libmachine: (ha-406505) DBG | Writing magic tar header
	I1007 10:46:01.205736   23621 main.go:141] libmachine: (ha-406505) DBG | Writing SSH key tar header
	I1007 10:46:01.205745   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:01.205639   23644 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505 ...
	I1007 10:46:01.205758   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505
	I1007 10:46:01.205765   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines
	I1007 10:46:01.205774   23621 main.go:141] libmachine: (ha-406505) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505 (perms=drwx------)
	I1007 10:46:01.205782   23621 main.go:141] libmachine: (ha-406505) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines (perms=drwxr-xr-x)
	I1007 10:46:01.205789   23621 main.go:141] libmachine: (ha-406505) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube (perms=drwxr-xr-x)
	I1007 10:46:01.205799   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:46:01.205809   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912
	I1007 10:46:01.205820   23621 main.go:141] libmachine: (ha-406505) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912 (perms=drwxrwxr-x)
	I1007 10:46:01.205825   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 10:46:01.205832   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home/jenkins
	I1007 10:46:01.205838   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home
	I1007 10:46:01.205845   23621 main.go:141] libmachine: (ha-406505) DBG | Skipping /home - not owner
	I1007 10:46:01.205854   23621 main.go:141] libmachine: (ha-406505) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 10:46:01.205860   23621 main.go:141] libmachine: (ha-406505) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 10:46:01.205868   23621 main.go:141] libmachine: (ha-406505) Creating domain...
	I1007 10:46:01.207028   23621 main.go:141] libmachine: (ha-406505) define libvirt domain using xml: 
	I1007 10:46:01.207069   23621 main.go:141] libmachine: (ha-406505) <domain type='kvm'>
	I1007 10:46:01.207077   23621 main.go:141] libmachine: (ha-406505)   <name>ha-406505</name>
	I1007 10:46:01.207082   23621 main.go:141] libmachine: (ha-406505)   <memory unit='MiB'>2200</memory>
	I1007 10:46:01.207087   23621 main.go:141] libmachine: (ha-406505)   <vcpu>2</vcpu>
	I1007 10:46:01.207093   23621 main.go:141] libmachine: (ha-406505)   <features>
	I1007 10:46:01.207097   23621 main.go:141] libmachine: (ha-406505)     <acpi/>
	I1007 10:46:01.207103   23621 main.go:141] libmachine: (ha-406505)     <apic/>
	I1007 10:46:01.207108   23621 main.go:141] libmachine: (ha-406505)     <pae/>
	I1007 10:46:01.207115   23621 main.go:141] libmachine: (ha-406505)     
	I1007 10:46:01.207120   23621 main.go:141] libmachine: (ha-406505)   </features>
	I1007 10:46:01.207124   23621 main.go:141] libmachine: (ha-406505)   <cpu mode='host-passthrough'>
	I1007 10:46:01.207129   23621 main.go:141] libmachine: (ha-406505)   
	I1007 10:46:01.207133   23621 main.go:141] libmachine: (ha-406505)   </cpu>
	I1007 10:46:01.207137   23621 main.go:141] libmachine: (ha-406505)   <os>
	I1007 10:46:01.207141   23621 main.go:141] libmachine: (ha-406505)     <type>hvm</type>
	I1007 10:46:01.207145   23621 main.go:141] libmachine: (ha-406505)     <boot dev='cdrom'/>
	I1007 10:46:01.207150   23621 main.go:141] libmachine: (ha-406505)     <boot dev='hd'/>
	I1007 10:46:01.207154   23621 main.go:141] libmachine: (ha-406505)     <bootmenu enable='no'/>
	I1007 10:46:01.207161   23621 main.go:141] libmachine: (ha-406505)   </os>
	I1007 10:46:01.207186   23621 main.go:141] libmachine: (ha-406505)   <devices>
	I1007 10:46:01.207206   23621 main.go:141] libmachine: (ha-406505)     <disk type='file' device='cdrom'>
	I1007 10:46:01.207220   23621 main.go:141] libmachine: (ha-406505)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/boot2docker.iso'/>
	I1007 10:46:01.207236   23621 main.go:141] libmachine: (ha-406505)       <target dev='hdc' bus='scsi'/>
	I1007 10:46:01.207250   23621 main.go:141] libmachine: (ha-406505)       <readonly/>
	I1007 10:46:01.207259   23621 main.go:141] libmachine: (ha-406505)     </disk>
	I1007 10:46:01.207281   23621 main.go:141] libmachine: (ha-406505)     <disk type='file' device='disk'>
	I1007 10:46:01.207300   23621 main.go:141] libmachine: (ha-406505)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 10:46:01.207324   23621 main.go:141] libmachine: (ha-406505)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/ha-406505.rawdisk'/>
	I1007 10:46:01.207335   23621 main.go:141] libmachine: (ha-406505)       <target dev='hda' bus='virtio'/>
	I1007 10:46:01.207342   23621 main.go:141] libmachine: (ha-406505)     </disk>
	I1007 10:46:01.207348   23621 main.go:141] libmachine: (ha-406505)     <interface type='network'>
	I1007 10:46:01.207354   23621 main.go:141] libmachine: (ha-406505)       <source network='mk-ha-406505'/>
	I1007 10:46:01.207361   23621 main.go:141] libmachine: (ha-406505)       <model type='virtio'/>
	I1007 10:46:01.207369   23621 main.go:141] libmachine: (ha-406505)     </interface>
	I1007 10:46:01.207381   23621 main.go:141] libmachine: (ha-406505)     <interface type='network'>
	I1007 10:46:01.207395   23621 main.go:141] libmachine: (ha-406505)       <source network='default'/>
	I1007 10:46:01.207406   23621 main.go:141] libmachine: (ha-406505)       <model type='virtio'/>
	I1007 10:46:01.207415   23621 main.go:141] libmachine: (ha-406505)     </interface>
	I1007 10:46:01.207422   23621 main.go:141] libmachine: (ha-406505)     <serial type='pty'>
	I1007 10:46:01.207432   23621 main.go:141] libmachine: (ha-406505)       <target port='0'/>
	I1007 10:46:01.207442   23621 main.go:141] libmachine: (ha-406505)     </serial>
	I1007 10:46:01.207469   23621 main.go:141] libmachine: (ha-406505)     <console type='pty'>
	I1007 10:46:01.207491   23621 main.go:141] libmachine: (ha-406505)       <target type='serial' port='0'/>
	I1007 10:46:01.207513   23621 main.go:141] libmachine: (ha-406505)     </console>
	I1007 10:46:01.207526   23621 main.go:141] libmachine: (ha-406505)     <rng model='virtio'>
	I1007 10:46:01.207539   23621 main.go:141] libmachine: (ha-406505)       <backend model='random'>/dev/random</backend>
	I1007 10:46:01.207548   23621 main.go:141] libmachine: (ha-406505)     </rng>
	I1007 10:46:01.207554   23621 main.go:141] libmachine: (ha-406505)     
	I1007 10:46:01.207563   23621 main.go:141] libmachine: (ha-406505)     
	I1007 10:46:01.207572   23621 main.go:141] libmachine: (ha-406505)   </devices>
	I1007 10:46:01.207587   23621 main.go:141] libmachine: (ha-406505) </domain>
	I1007 10:46:01.207603   23621 main.go:141] libmachine: (ha-406505) 
	I1007 10:46:01.211673   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:76:8f:a7 in network default
	I1007 10:46:01.212309   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:01.212331   23621 main.go:141] libmachine: (ha-406505) Ensuring networks are active...
	I1007 10:46:01.212999   23621 main.go:141] libmachine: (ha-406505) Ensuring network default is active
	I1007 10:46:01.213295   23621 main.go:141] libmachine: (ha-406505) Ensuring network mk-ha-406505 is active
	I1007 10:46:01.213746   23621 main.go:141] libmachine: (ha-406505) Getting domain xml...
	I1007 10:46:01.214325   23621 main.go:141] libmachine: (ha-406505) Creating domain...
	I1007 10:46:02.421940   23621 main.go:141] libmachine: (ha-406505) Waiting to get IP...
	I1007 10:46:02.422559   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:02.422963   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:02.423013   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:02.422950   23644 retry.go:31] will retry after 195.328474ms: waiting for machine to come up
	I1007 10:46:02.620556   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:02.621120   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:02.621158   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:02.621075   23644 retry.go:31] will retry after 387.449002ms: waiting for machine to come up
	I1007 10:46:03.009575   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:03.010111   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:03.010135   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:03.010073   23644 retry.go:31] will retry after 404.721004ms: waiting for machine to come up
	I1007 10:46:03.416746   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:03.417186   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:03.417213   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:03.417138   23644 retry.go:31] will retry after 372.059443ms: waiting for machine to come up
	I1007 10:46:03.790603   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:03.791114   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:03.791143   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:03.791071   23644 retry.go:31] will retry after 494.767467ms: waiting for machine to come up
	I1007 10:46:04.287716   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:04.288192   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:04.288211   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:04.288147   23644 retry.go:31] will retry after 903.556325ms: waiting for machine to come up
	I1007 10:46:05.193010   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:05.193532   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:05.193599   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:05.193453   23644 retry.go:31] will retry after 1.025768675s: waiting for machine to come up
	I1007 10:46:06.220323   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:06.220836   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:06.220866   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:06.220776   23644 retry.go:31] will retry after 1.100294717s: waiting for machine to come up
	I1007 10:46:07.323044   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:07.323554   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:07.323582   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:07.323505   23644 retry.go:31] will retry after 1.146070621s: waiting for machine to come up
	I1007 10:46:08.470888   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:08.471336   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:08.471361   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:08.471279   23644 retry.go:31] will retry after 2.296444266s: waiting for machine to come up
	I1007 10:46:10.768938   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:10.769285   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:10.769343   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:10.769271   23644 retry.go:31] will retry after 2.239094894s: waiting for machine to come up
	I1007 10:46:13.010328   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:13.010763   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:13.010789   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:13.010721   23644 retry.go:31] will retry after 3.13857084s: waiting for machine to come up
	I1007 10:46:16.150462   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:16.150858   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:16.150885   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:16.150808   23644 retry.go:31] will retry after 3.125257266s: waiting for machine to come up
	I1007 10:46:19.280079   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:19.280531   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:19.280561   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:19.280474   23644 retry.go:31] will retry after 5.119838312s: waiting for machine to come up
	I1007 10:46:24.405645   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.406055   23621 main.go:141] libmachine: (ha-406505) Found IP for machine: 192.168.39.250
	I1007 10:46:24.406093   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has current primary IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.406101   23621 main.go:141] libmachine: (ha-406505) Reserving static IP address...
	I1007 10:46:24.406506   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find host DHCP lease matching {name: "ha-406505", mac: "52:54:00:1d:e2:d7", ip: "192.168.39.250"} in network mk-ha-406505
	I1007 10:46:24.482533   23621 main.go:141] libmachine: (ha-406505) DBG | Getting to WaitForSSH function...
	I1007 10:46:24.482567   23621 main.go:141] libmachine: (ha-406505) Reserved static IP address: 192.168.39.250
	I1007 10:46:24.482583   23621 main.go:141] libmachine: (ha-406505) Waiting for SSH to be available...
	I1007 10:46:24.485308   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.485711   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:24.485764   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.485909   23621 main.go:141] libmachine: (ha-406505) DBG | Using SSH client type: external
	I1007 10:46:24.485935   23621 main.go:141] libmachine: (ha-406505) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa (-rw-------)
	I1007 10:46:24.485971   23621 main.go:141] libmachine: (ha-406505) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 10:46:24.485988   23621 main.go:141] libmachine: (ha-406505) DBG | About to run SSH command:
	I1007 10:46:24.486003   23621 main.go:141] libmachine: (ha-406505) DBG | exit 0
	I1007 10:46:24.612334   23621 main.go:141] libmachine: (ha-406505) DBG | SSH cmd err, output: <nil>: 
	I1007 10:46:24.612631   23621 main.go:141] libmachine: (ha-406505) KVM machine creation complete!
	I1007 10:46:24.613069   23621 main.go:141] libmachine: (ha-406505) Calling .GetConfigRaw
	I1007 10:46:24.613769   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:24.614010   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:24.614210   23621 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 10:46:24.614233   23621 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:46:24.615544   23621 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 10:46:24.615563   23621 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 10:46:24.615570   23621 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 10:46:24.615577   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:24.617899   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.618287   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:24.618310   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.618494   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:24.618666   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.618809   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.618921   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:24.619056   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:46:24.619306   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:46:24.619320   23621 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 10:46:24.727419   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:46:24.727448   23621 main.go:141] libmachine: Detecting the provisioner...
	I1007 10:46:24.727458   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:24.730240   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.730602   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:24.730629   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.730740   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:24.730937   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.731096   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.731252   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:24.731417   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:46:24.731578   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:46:24.731587   23621 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 10:46:24.845378   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 10:46:24.845478   23621 main.go:141] libmachine: found compatible host: buildroot
	I1007 10:46:24.845490   23621 main.go:141] libmachine: Provisioning with buildroot...
	I1007 10:46:24.845498   23621 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:46:24.845780   23621 buildroot.go:166] provisioning hostname "ha-406505"
	I1007 10:46:24.845810   23621 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:46:24.846017   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:24.849059   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.849533   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:24.849565   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.849690   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:24.849892   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.850056   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.850226   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:24.850372   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:46:24.850530   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:46:24.850541   23621 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406505 && echo "ha-406505" | sudo tee /etc/hostname
	I1007 10:46:24.974484   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406505
	
	I1007 10:46:24.974507   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:24.977334   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.977841   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:24.977876   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.978053   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:24.978231   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.978390   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.978528   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:24.978725   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:46:24.978910   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:46:24.978926   23621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406505' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406505/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406505' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 10:46:25.097736   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:46:25.097768   23621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 10:46:25.097810   23621 buildroot.go:174] setting up certificates
	I1007 10:46:25.097819   23621 provision.go:84] configureAuth start
	I1007 10:46:25.097832   23621 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:46:25.098143   23621 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:46:25.100773   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.101119   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.101156   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.101261   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:25.103487   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.103793   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.103821   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.103966   23621 provision.go:143] copyHostCerts
	I1007 10:46:25.104016   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:46:25.104068   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 10:46:25.104102   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:46:25.104302   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 10:46:25.104436   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:46:25.104469   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 10:46:25.104478   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:46:25.104534   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 10:46:25.104606   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:46:25.104633   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 10:46:25.104641   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:46:25.104691   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 10:46:25.104770   23621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.ha-406505 san=[127.0.0.1 192.168.39.250 ha-406505 localhost minikube]
	I1007 10:46:25.393470   23621 provision.go:177] copyRemoteCerts
	I1007 10:46:25.393548   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 10:46:25.393578   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:25.396327   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.396617   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.396642   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.396839   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:25.397030   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:25.397230   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:25.397379   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:46:25.482559   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 10:46:25.482632   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1007 10:46:25.508425   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 10:46:25.508519   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 10:46:25.534913   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 10:46:25.534986   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 10:46:25.560790   23621 provision.go:87] duration metric: took 462.953383ms to configureAuth
	I1007 10:46:25.560817   23621 buildroot.go:189] setting minikube options for container-runtime
	I1007 10:46:25.560982   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:46:25.561053   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:25.563730   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.564168   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.564201   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.564402   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:25.564589   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:25.564760   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:25.564923   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:25.565085   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:46:25.565253   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:46:25.565272   23621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 10:46:25.800362   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 10:46:25.800389   23621 main.go:141] libmachine: Checking connection to Docker...
	I1007 10:46:25.800397   23621 main.go:141] libmachine: (ha-406505) Calling .GetURL
	I1007 10:46:25.801606   23621 main.go:141] libmachine: (ha-406505) DBG | Using libvirt version 6000000
	I1007 10:46:25.803904   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.804248   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.804273   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.804397   23621 main.go:141] libmachine: Docker is up and running!
	I1007 10:46:25.804414   23621 main.go:141] libmachine: Reticulating splines...
	I1007 10:46:25.804421   23621 client.go:171] duration metric: took 25.026640958s to LocalClient.Create
	I1007 10:46:25.804457   23621 start.go:167] duration metric: took 25.026720726s to libmachine.API.Create "ha-406505"
	I1007 10:46:25.804469   23621 start.go:293] postStartSetup for "ha-406505" (driver="kvm2")
	I1007 10:46:25.804483   23621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 10:46:25.804519   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:25.804801   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 10:46:25.804822   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:25.806847   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.807242   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.807267   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.807402   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:25.807601   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:25.807734   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:25.807837   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:46:25.896212   23621 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 10:46:25.901311   23621 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 10:46:25.901340   23621 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 10:46:25.901403   23621 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 10:46:25.901507   23621 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 10:46:25.901521   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /etc/ssl/certs/110962.pem
	I1007 10:46:25.901647   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 10:46:25.912163   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:46:25.940558   23621 start.go:296] duration metric: took 136.073342ms for postStartSetup
	I1007 10:46:25.940602   23621 main.go:141] libmachine: (ha-406505) Calling .GetConfigRaw
	I1007 10:46:25.941179   23621 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:46:25.943928   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.944270   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.944295   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.944594   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:46:25.944766   23621 start.go:128] duration metric: took 25.185278256s to createHost
	I1007 10:46:25.944788   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:25.946920   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.947236   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.947263   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.947390   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:25.947554   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:25.947698   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:25.947796   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:25.947917   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:46:25.948107   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:46:25.948122   23621 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 10:46:26.057285   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728297986.034090654
	
	I1007 10:46:26.057320   23621 fix.go:216] guest clock: 1728297986.034090654
	I1007 10:46:26.057332   23621 fix.go:229] Guest: 2024-10-07 10:46:26.034090654 +0000 UTC Remote: 2024-10-07 10:46:25.944777719 +0000 UTC m=+25.297917279 (delta=89.312935ms)
	I1007 10:46:26.057360   23621 fix.go:200] guest clock delta is within tolerance: 89.312935ms
	I1007 10:46:26.057368   23621 start.go:83] releasing machines lock for "ha-406505", held for 25.297953369s
	I1007 10:46:26.057394   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:26.057664   23621 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:46:26.060710   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:26.061183   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:26.061235   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:26.061454   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:26.061984   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:26.062147   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:26.062276   23621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 10:46:26.062317   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:26.062353   23621 ssh_runner.go:195] Run: cat /version.json
	I1007 10:46:26.062375   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:26.065089   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:26.065433   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:26.065561   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:26.065589   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:26.065720   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:26.065828   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:26.065853   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:26.065883   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:26.065971   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:26.066095   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:26.066095   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:26.066234   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:26.066283   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:46:26.066351   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:46:26.174687   23621 ssh_runner.go:195] Run: systemctl --version
	I1007 10:46:26.181055   23621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 10:46:26.339685   23621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 10:46:26.346234   23621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 10:46:26.346285   23621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 10:46:26.362376   23621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 10:46:26.362399   23621 start.go:495] detecting cgroup driver to use...
	I1007 10:46:26.362452   23621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 10:46:26.378080   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 10:46:26.392505   23621 docker.go:217] disabling cri-docker service (if available) ...
	I1007 10:46:26.392560   23621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 10:46:26.406784   23621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 10:46:26.422960   23621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 10:46:26.552971   23621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 10:46:26.690240   23621 docker.go:233] disabling docker service ...
	I1007 10:46:26.690309   23621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 10:46:26.706428   23621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 10:46:26.721025   23621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 10:46:26.853079   23621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 10:46:26.978324   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 10:46:26.994454   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 10:46:27.014137   23621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 10:46:27.014198   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.025749   23621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 10:46:27.025816   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.037748   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.049263   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.062174   23621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 10:46:27.074940   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.086608   23621 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.104859   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.116719   23621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 10:46:27.127669   23621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 10:46:27.127745   23621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 10:46:27.142518   23621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 10:46:27.153045   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:46:27.275924   23621 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 10:46:27.373391   23621 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 10:46:27.373475   23621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 10:46:27.378225   23621 start.go:563] Will wait 60s for crictl version
	I1007 10:46:27.378286   23621 ssh_runner.go:195] Run: which crictl
	I1007 10:46:27.382179   23621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 10:46:27.423267   23621 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 10:46:27.423395   23621 ssh_runner.go:195] Run: crio --version
	I1007 10:46:27.453236   23621 ssh_runner.go:195] Run: crio --version
	I1007 10:46:27.483657   23621 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 10:46:27.484938   23621 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:46:27.487606   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:27.487998   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:27.488028   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:27.488343   23621 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 10:46:27.492528   23621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:46:27.506306   23621 kubeadm.go:883] updating cluster {Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 10:46:27.506405   23621 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:46:27.506452   23621 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:46:27.539872   23621 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 10:46:27.539951   23621 ssh_runner.go:195] Run: which lz4
	I1007 10:46:27.544145   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1007 10:46:27.544248   23621 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 10:46:27.549024   23621 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 10:46:27.549064   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 10:46:28.958319   23621 crio.go:462] duration metric: took 1.414106826s to copy over tarball
	I1007 10:46:28.958395   23621 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 10:46:30.997682   23621 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.039251996s)
	I1007 10:46:30.997713   23621 crio.go:469] duration metric: took 2.039368509s to extract the tarball
	I1007 10:46:30.997720   23621 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 10:46:31.039009   23621 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:46:31.088841   23621 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 10:46:31.088866   23621 cache_images.go:84] Images are preloaded, skipping loading
	I1007 10:46:31.088873   23621 kubeadm.go:934] updating node { 192.168.39.250 8443 v1.31.1 crio true true} ...
	I1007 10:46:31.089007   23621 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406505 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 10:46:31.089099   23621 ssh_runner.go:195] Run: crio config
	I1007 10:46:31.133611   23621 cni.go:84] Creating CNI manager for ""
	I1007 10:46:31.133634   23621 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 10:46:31.133642   23621 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 10:46:31.133662   23621 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.250 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-406505 NodeName:ha-406505 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 10:46:31.133799   23621 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-406505"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 10:46:31.133825   23621 kube-vip.go:115] generating kube-vip config ...
	I1007 10:46:31.133864   23621 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 10:46:31.150299   23621 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 10:46:31.150386   23621 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1007 10:46:31.150432   23621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 10:46:31.160704   23621 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 10:46:31.160771   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1007 10:46:31.170635   23621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1007 10:46:31.188233   23621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 10:46:31.205276   23621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1007 10:46:31.222191   23621 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1007 10:46:31.240224   23621 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 10:46:31.244214   23621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:46:31.257345   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:46:31.397967   23621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:46:31.417027   23621 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505 for IP: 192.168.39.250
	I1007 10:46:31.417077   23621 certs.go:194] generating shared ca certs ...
	I1007 10:46:31.417100   23621 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.417284   23621 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 10:46:31.417383   23621 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 10:46:31.417398   23621 certs.go:256] generating profile certs ...
	I1007 10:46:31.417447   23621 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key
	I1007 10:46:31.417461   23621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.crt with IP's: []
	I1007 10:46:31.468016   23621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.crt ...
	I1007 10:46:31.468047   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.crt: {Name:mk762d603dc2fbb5c1297f6a7a3cc345fce24083 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.468271   23621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key ...
	I1007 10:46:31.468286   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key: {Name:mk7067411a96e86ff81d9c76638d9b65fd88775f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.468374   23621 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.139948ad
	I1007 10:46:31.468389   23621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.139948ad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.250 192.168.39.254]
	I1007 10:46:31.560197   23621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.139948ad ...
	I1007 10:46:31.560235   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.139948ad: {Name:mk03ccdd590c02d4a8e3fdabb8ce2b00441c3bcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.560434   23621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.139948ad ...
	I1007 10:46:31.560450   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.139948ad: {Name:mk9acbd48737ac1a11351bcc3c9e01a19e35889d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.560533   23621 certs.go:381] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.139948ad -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt
	I1007 10:46:31.560605   23621 certs.go:385] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.139948ad -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key
	I1007 10:46:31.560660   23621 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key
	I1007 10:46:31.560674   23621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt with IP's: []
	I1007 10:46:31.824715   23621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt ...
	I1007 10:46:31.824745   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt: {Name:mk2f87794c4b3ce39df0df4382fd33d9633bb32b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.824924   23621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key ...
	I1007 10:46:31.824937   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key: {Name:mka71f56202903b2b66df7c3367c064cbfe379ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.825016   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 10:46:31.825037   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 10:46:31.825053   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 10:46:31.825068   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 10:46:31.825083   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 10:46:31.825098   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 10:46:31.825112   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 10:46:31.825130   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 10:46:31.825188   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 10:46:31.825225   23621 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 10:46:31.825236   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 10:46:31.825267   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 10:46:31.825296   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 10:46:31.825321   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 10:46:31.825363   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:46:31.825391   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:46:31.825407   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem -> /usr/share/ca-certificates/11096.pem
	I1007 10:46:31.825421   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /usr/share/ca-certificates/110962.pem
	I1007 10:46:31.825934   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 10:46:31.854979   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 10:46:31.881623   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 10:46:31.908276   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 10:46:31.933657   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1007 10:46:31.959947   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 10:46:31.985851   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 10:46:32.010600   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 10:46:32.035549   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 10:46:32.060173   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 10:46:32.084842   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 10:46:32.110513   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 10:46:32.129118   23621 ssh_runner.go:195] Run: openssl version
	I1007 10:46:32.134991   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 10:46:32.146083   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 10:46:32.150750   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 10:46:32.150813   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 10:46:32.156917   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
	I1007 10:46:32.167842   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 10:46:32.179302   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 10:46:32.184104   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 10:46:32.184166   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 10:46:32.189957   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 10:46:32.203820   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 10:46:32.218928   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:46:32.223877   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:46:32.223932   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:46:32.234358   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
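Note: the openssl/ln sequence above installs each CA into the node's trust store by symlinking it under /etc/ssl/certs as <subject-hash>.0, the same convention update-ca-certificates uses. A small Go sketch of that pattern (not minikube's actual certs.go code) is below.

// Hedged sketch: copy-in trust-store step seen above. The subject hash from
// `openssl x509 -hash -noout` names the /etc/ssl/certs/<hash>.0 symlink.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pem string) error {
	// openssl prints the subject hash (e.g. b5213941) used to name the symlink.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs equivalent: drop any stale link, then point <hash>.0 at the cert.
	_ = os.Remove(link)
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}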
	I1007 10:46:32.254776   23621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 10:46:32.262324   23621 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 10:46:32.262372   23621 kubeadm.go:392] StartCluster: {Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:46:32.262436   23621 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 10:46:32.262503   23621 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 10:46:32.310104   23621 cri.go:89] found id: ""
	I1007 10:46:32.310161   23621 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 10:46:32.319996   23621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 10:46:32.329800   23621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 10:46:32.339655   23621 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 10:46:32.339683   23621 kubeadm.go:157] found existing configuration files:
	
	I1007 10:46:32.339722   23621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 10:46:32.348661   23621 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 10:46:32.348719   23621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 10:46:32.358855   23621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 10:46:32.368082   23621 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 10:46:32.368138   23621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 10:46:32.378072   23621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 10:46:32.387338   23621 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 10:46:32.387394   23621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 10:46:32.397186   23621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 10:46:32.406684   23621 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 10:46:32.406738   23621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
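Note: the grep/rm pairs above are minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so kubeadm can regenerate it. A minimal sketch of that loop (not the exact kubeadm.go logic) follows.

// Hedged sketch of the stale-kubeconfig cleanup logged above. Paths and the
// control-plane endpoint are taken from the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	configs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range configs {
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing somewhere else: delete it (rm -f semantics)
			// so `kubeadm init` writes a fresh one.
			_ = os.Remove(path)
			fmt.Printf("removed stale config %s\n", path)
		}
	}
}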
	I1007 10:46:32.417090   23621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 10:46:32.545879   23621 kubeadm.go:310] W1007 10:46:32.529591     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 10:46:32.546834   23621 kubeadm.go:310] W1007 10:46:32.530709     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 10:46:32.656304   23621 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 10:46:43.090298   23621 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 10:46:43.090373   23621 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 10:46:43.090492   23621 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 10:46:43.090653   23621 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 10:46:43.090862   23621 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 10:46:43.090964   23621 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 10:46:43.092688   23621 out.go:235]   - Generating certificates and keys ...
	I1007 10:46:43.092759   23621 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 10:46:43.092833   23621 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 10:46:43.092901   23621 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 10:46:43.092950   23621 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 10:46:43.092999   23621 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 10:46:43.093054   23621 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 10:46:43.093106   23621 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 10:46:43.093205   23621 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-406505 localhost] and IPs [192.168.39.250 127.0.0.1 ::1]
	I1007 10:46:43.093261   23621 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 10:46:43.093417   23621 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-406505 localhost] and IPs [192.168.39.250 127.0.0.1 ::1]
	I1007 10:46:43.093514   23621 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 10:46:43.093567   23621 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 10:46:43.093623   23621 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 10:46:43.093706   23621 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 10:46:43.093782   23621 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 10:46:43.093856   23621 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 10:46:43.093933   23621 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 10:46:43.094023   23621 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 10:46:43.094096   23621 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 10:46:43.094210   23621 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 10:46:43.094282   23621 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 10:46:43.095798   23621 out.go:235]   - Booting up control plane ...
	I1007 10:46:43.095884   23621 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 10:46:43.095971   23621 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 10:46:43.096065   23621 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 10:46:43.096171   23621 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 10:46:43.096294   23621 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 10:46:43.096350   23621 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 10:46:43.096510   23621 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 10:46:43.096664   23621 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 10:46:43.096745   23621 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.992623ms
	I1007 10:46:43.096840   23621 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 10:46:43.096957   23621 kubeadm.go:310] [api-check] The API server is healthy after 6.063891261s
	I1007 10:46:43.097083   23621 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 10:46:43.097207   23621 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 10:46:43.097264   23621 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 10:46:43.097410   23621 kubeadm.go:310] [mark-control-plane] Marking the node ha-406505 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 10:46:43.097470   23621 kubeadm.go:310] [bootstrap-token] Using token: wypuxz.8mosh3hhf4vr1jtg
	I1007 10:46:43.098950   23621 out.go:235]   - Configuring RBAC rules ...
	I1007 10:46:43.099071   23621 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 10:46:43.099163   23621 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 10:46:43.099343   23621 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 10:46:43.099509   23621 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 10:46:43.099662   23621 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 10:46:43.099752   23621 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 10:46:43.099910   23621 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 10:46:43.099999   23621 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 10:46:43.100092   23621 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 10:46:43.100101   23621 kubeadm.go:310] 
	I1007 10:46:43.100184   23621 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 10:46:43.100194   23621 kubeadm.go:310] 
	I1007 10:46:43.100298   23621 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 10:46:43.100307   23621 kubeadm.go:310] 
	I1007 10:46:43.100344   23621 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 10:46:43.100433   23621 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 10:46:43.100524   23621 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 10:46:43.100533   23621 kubeadm.go:310] 
	I1007 10:46:43.100614   23621 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 10:46:43.100626   23621 kubeadm.go:310] 
	I1007 10:46:43.100698   23621 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 10:46:43.100713   23621 kubeadm.go:310] 
	I1007 10:46:43.100756   23621 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 10:46:43.100822   23621 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 10:46:43.100914   23621 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 10:46:43.100930   23621 kubeadm.go:310] 
	I1007 10:46:43.101035   23621 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 10:46:43.101136   23621 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 10:46:43.101145   23621 kubeadm.go:310] 
	I1007 10:46:43.101255   23621 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wypuxz.8mosh3hhf4vr1jtg \
	I1007 10:46:43.101367   23621 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df \
	I1007 10:46:43.101400   23621 kubeadm.go:310] 	--control-plane 
	I1007 10:46:43.101407   23621 kubeadm.go:310] 
	I1007 10:46:43.101475   23621 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 10:46:43.101485   23621 kubeadm.go:310] 
	I1007 10:46:43.101546   23621 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wypuxz.8mosh3hhf4vr1jtg \
	I1007 10:46:43.101655   23621 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df 
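Note: the --discovery-token-ca-cert-hash printed by kubeadm above is the SHA-256 of the cluster CA certificate's Subject Public Key Info (SPKI) in DER form. The sketch below recomputes it in Go; the CA path is the conventional kubeadm location and is an assumption here.

// Hedged sketch: recompute kubeadm's discovery-token-ca-cert-hash from the CA.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	raw, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // assumed kubeadm default path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// cert.RawSubjectPublicKeyInfo is the DER-encoded SPKI that kubeadm hashes.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}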
	I1007 10:46:43.101680   23621 cni.go:84] Creating CNI manager for ""
	I1007 10:46:43.101688   23621 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 10:46:43.103490   23621 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1007 10:46:43.104857   23621 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1007 10:46:43.110599   23621 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1007 10:46:43.110619   23621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1007 10:46:43.132034   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1007 10:46:43.562211   23621 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 10:46:43.562270   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:43.562324   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-406505 minikube.k8s.io/updated_at=2024_10_07T10_46_43_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f minikube.k8s.io/name=ha-406505 minikube.k8s.io/primary=true
	I1007 10:46:43.616727   23621 ops.go:34] apiserver oom_adj: -16
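Note: the oom_adj check above confirms the API server runs with a strongly negative OOM score (-16 here), so it is among the last processes killed under memory pressure. A small sketch of that check follows; it shells out to pgrep rather than reading cgroup/kubelet state directly.

// Hedged sketch of `cat /proc/$(pgrep kube-apiserver)/oom_adj` as logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
		os.Exit(1)
	}
	pid := strings.Fields(string(out))[0] // first matching PID
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}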
	I1007 10:46:43.782316   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:44.282755   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:44.782532   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:45.283204   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:45.783063   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:46.283266   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:46.783411   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:46.943992   23621 kubeadm.go:1113] duration metric: took 3.381769921s to wait for elevateKubeSystemPrivileges
	I1007 10:46:46.944035   23621 kubeadm.go:394] duration metric: took 14.681663569s to StartCluster
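Note: the repeated `kubectl get sa default` runs above are a wait loop: minikube polls roughly every 500ms until the "default" ServiceAccount exists, which signals that kube-controller-manager has finished bootstrapping the namespace and RBAC elevation can proceed. A minimal sketch of that loop follows; binary and kubeconfig paths are the ones shown in the log.

// Hedged sketch of the elevateKubeSystemPrivileges wait loop logged above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default ServiceAccount is present")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default ServiceAccount")
}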
	I1007 10:46:46.944056   23621 settings.go:142] acquiring lock: {Name:mk699f217216dbe513edf6a42c79fe85f8c20124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:46.944147   23621 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:46:46.945102   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/kubeconfig: {Name:mkc8a5ce1dbafe55e056433fff5c065506f83346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:46.945388   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 10:46:46.945386   23621 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:46:46.945413   23621 start.go:241] waiting for startup goroutines ...
	I1007 10:46:46.945429   23621 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 10:46:46.945523   23621 addons.go:69] Setting storage-provisioner=true in profile "ha-406505"
	I1007 10:46:46.945543   23621 addons.go:234] Setting addon storage-provisioner=true in "ha-406505"
	I1007 10:46:46.945553   23621 addons.go:69] Setting default-storageclass=true in profile "ha-406505"
	I1007 10:46:46.945572   23621 host.go:66] Checking if "ha-406505" exists ...
	I1007 10:46:46.945583   23621 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-406505"
	I1007 10:46:46.945607   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:46:46.946008   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:46.946009   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:46.946088   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:46.946051   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:46.961784   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45751
	I1007 10:46:46.961861   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42021
	I1007 10:46:46.962343   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:46.962400   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:46.962845   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:46.962858   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:46.962977   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:46.962998   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:46.963231   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:46.963434   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:46.963629   23621 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:46:46.963828   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:46.963879   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:46.966424   23621 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:46:46.966748   23621 kapi.go:59] client config for ha-406505: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.crt", KeyFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key", CAFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 10:46:46.967326   23621 cert_rotation.go:140] Starting client certificate rotation controller
	I1007 10:46:46.967544   23621 addons.go:234] Setting addon default-storageclass=true in "ha-406505"
	I1007 10:46:46.967595   23621 host.go:66] Checking if "ha-406505" exists ...
	I1007 10:46:46.967974   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:46.968044   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:46.980041   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40697
	I1007 10:46:46.980679   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:46.981275   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:46.981307   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:46.981679   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:46.981861   23621 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:46:46.982917   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43249
	I1007 10:46:46.983418   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:46.983677   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:46.983888   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:46.983902   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:46.984223   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:46.984726   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:46.984780   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:46.985635   23621 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 10:46:46.986794   23621 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 10:46:46.986811   23621 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 10:46:46.986827   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:46.990137   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:46.990593   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:46.990630   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:46.990792   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:46.990980   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:46.991153   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:46.991295   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:46:47.000938   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34485
	I1007 10:46:47.001317   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:47.001822   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:47.001835   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:47.002157   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:47.002359   23621 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:46:47.004192   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:47.004381   23621 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 10:46:47.004396   23621 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 10:46:47.004415   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:47.007286   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:47.007709   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:47.007733   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:47.007859   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:47.008018   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:47.008149   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:47.008248   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:46:47.195335   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 10:46:47.217916   23621 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 10:46:47.332630   23621 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 10:46:47.810865   23621 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
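Note: the long sed pipeline above injects a `hosts` block into the CoreDNS Corefile so host.minikube.internal resolves to the host-only gateway (192.168.39.1 here), then replaces the ConfigMap. The sketch below reproduces just the text edit (not the actual sed/kubectl pipeline) on an example Corefile.

// Hedged sketch of the CoreDNS host-record injection logged above.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		// Place the hosts block immediately above the forward plugin, as the
		// sed expression in the log does.
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	// Minimal example Corefile (illustrative, not the cluster's exact one).
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}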
	I1007 10:46:48.064696   23621 main.go:141] libmachine: Making call to close driver server
	I1007 10:46:48.064705   23621 main.go:141] libmachine: Making call to close driver server
	I1007 10:46:48.064720   23621 main.go:141] libmachine: (ha-406505) Calling .Close
	I1007 10:46:48.064727   23621 main.go:141] libmachine: (ha-406505) Calling .Close
	I1007 10:46:48.064985   23621 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:46:48.065031   23621 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:46:48.065048   23621 main.go:141] libmachine: Making call to close driver server
	I1007 10:46:48.065053   23621 main.go:141] libmachine: (ha-406505) DBG | Closing plugin on server side
	I1007 10:46:48.065058   23621 main.go:141] libmachine: (ha-406505) Calling .Close
	I1007 10:46:48.064988   23621 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:46:48.065100   23621 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:46:48.065116   23621 main.go:141] libmachine: Making call to close driver server
	I1007 10:46:48.065125   23621 main.go:141] libmachine: (ha-406505) Calling .Close
	I1007 10:46:48.065104   23621 main.go:141] libmachine: (ha-406505) DBG | Closing plugin on server side
	I1007 10:46:48.065227   23621 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:46:48.065239   23621 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:46:48.066429   23621 main.go:141] libmachine: (ha-406505) DBG | Closing plugin on server side
	I1007 10:46:48.066481   23621 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:46:48.066520   23621 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:46:48.066607   23621 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 10:46:48.066629   23621 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 10:46:48.066712   23621 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1007 10:46:48.066721   23621 round_trippers.go:469] Request Headers:
	I1007 10:46:48.066729   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:46:48.066749   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:46:48.079736   23621 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1007 10:46:48.080394   23621 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1007 10:46:48.080409   23621 round_trippers.go:469] Request Headers:
	I1007 10:46:48.080417   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:46:48.080421   23621 round_trippers.go:473]     Content-Type: application/json
	I1007 10:46:48.080424   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:46:48.082744   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:46:48.082873   23621 main.go:141] libmachine: Making call to close driver server
	I1007 10:46:48.082885   23621 main.go:141] libmachine: (ha-406505) Calling .Close
	I1007 10:46:48.083166   23621 main.go:141] libmachine: (ha-406505) DBG | Closing plugin on server side
	I1007 10:46:48.083174   23621 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:46:48.083188   23621 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:46:48.084834   23621 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1007 10:46:48.085997   23621 addons.go:510] duration metric: took 1.140572645s for enable addons: enabled=[storage-provisioner default-storageclass]
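Note: the GET/PUT round trips to /apis/storage.k8s.io/v1/storageclasses above are the default-storageclass addon marking "standard" as the cluster default. A client-go sketch of what that PUT accomplishes follows; it is an assumed illustration, not minikube's addon code, and the kubeconfig path is the one the log shows.

// Hedged sketch: annotate the "standard" StorageClass as the default class.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19761-3912/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	sc, err := client.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	// This annotation is what makes a StorageClass the cluster default.
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("marked StorageClass standard as default")
}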
	I1007 10:46:48.086031   23621 start.go:246] waiting for cluster config update ...
	I1007 10:46:48.086044   23621 start.go:255] writing updated cluster config ...
	I1007 10:46:48.087964   23621 out.go:201] 
	I1007 10:46:48.089528   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:46:48.089609   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:46:48.091151   23621 out.go:177] * Starting "ha-406505-m02" control-plane node in "ha-406505" cluster
	I1007 10:46:48.092447   23621 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:46:48.092473   23621 cache.go:56] Caching tarball of preloaded images
	I1007 10:46:48.092563   23621 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 10:46:48.092574   23621 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 10:46:48.092637   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:46:48.092794   23621 start.go:360] acquireMachinesLock for ha-406505-m02: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 10:46:48.092831   23621 start.go:364] duration metric: took 21.347µs to acquireMachinesLock for "ha-406505-m02"
	I1007 10:46:48.092855   23621 start.go:93] Provisioning new machine with config: &{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:46:48.092915   23621 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1007 10:46:48.094418   23621 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 10:46:48.094505   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:48.094537   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:48.110315   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34135
	I1007 10:46:48.110866   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:48.111379   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:48.111403   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:48.111770   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:48.111953   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetMachineName
	I1007 10:46:48.112082   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:46:48.112219   23621 start.go:159] libmachine.API.Create for "ha-406505" (driver="kvm2")
	I1007 10:46:48.112248   23621 client.go:168] LocalClient.Create starting
	I1007 10:46:48.112287   23621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem
	I1007 10:46:48.112335   23621 main.go:141] libmachine: Decoding PEM data...
	I1007 10:46:48.112356   23621 main.go:141] libmachine: Parsing certificate...
	I1007 10:46:48.112422   23621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem
	I1007 10:46:48.112452   23621 main.go:141] libmachine: Decoding PEM data...
	I1007 10:46:48.112468   23621 main.go:141] libmachine: Parsing certificate...
	I1007 10:46:48.112494   23621 main.go:141] libmachine: Running pre-create checks...
	I1007 10:46:48.112506   23621 main.go:141] libmachine: (ha-406505-m02) Calling .PreCreateCheck
	I1007 10:46:48.112657   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetConfigRaw
	I1007 10:46:48.113018   23621 main.go:141] libmachine: Creating machine...
	I1007 10:46:48.113031   23621 main.go:141] libmachine: (ha-406505-m02) Calling .Create
	I1007 10:46:48.113183   23621 main.go:141] libmachine: (ha-406505-m02) Creating KVM machine...
	I1007 10:46:48.114398   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found existing default KVM network
	I1007 10:46:48.114519   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found existing private KVM network mk-ha-406505
	I1007 10:46:48.114657   23621 main.go:141] libmachine: (ha-406505-m02) Setting up store path in /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02 ...
	I1007 10:46:48.114682   23621 main.go:141] libmachine: (ha-406505-m02) Building disk image from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 10:46:48.114793   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:48.114651   23988 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:46:48.114857   23621 main.go:141] libmachine: (ha-406505-m02) Downloading /home/jenkins/minikube-integration/19761-3912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 10:46:48.352057   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:48.351887   23988 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa...
	I1007 10:46:48.484305   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:48.484165   23988 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/ha-406505-m02.rawdisk...
	I1007 10:46:48.484357   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Writing magic tar header
	I1007 10:46:48.484379   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Writing SSH key tar header
	I1007 10:46:48.484391   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:48.484280   23988 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02 ...
	I1007 10:46:48.484403   23621 main.go:141] libmachine: (ha-406505-m02) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02 (perms=drwx------)
	I1007 10:46:48.484420   23621 main.go:141] libmachine: (ha-406505-m02) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines (perms=drwxr-xr-x)
	I1007 10:46:48.484433   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02
	I1007 10:46:48.484444   23621 main.go:141] libmachine: (ha-406505-m02) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube (perms=drwxr-xr-x)
	I1007 10:46:48.484459   23621 main.go:141] libmachine: (ha-406505-m02) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912 (perms=drwxrwxr-x)
	I1007 10:46:48.484478   23621 main.go:141] libmachine: (ha-406505-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 10:46:48.484491   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines
	I1007 10:46:48.484510   23621 main.go:141] libmachine: (ha-406505-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 10:46:48.484523   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:46:48.484535   23621 main.go:141] libmachine: (ha-406505-m02) Creating domain...
	I1007 10:46:48.484554   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912
	I1007 10:46:48.484571   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 10:46:48.484583   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home/jenkins
	I1007 10:46:48.484602   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home
	I1007 10:46:48.484618   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Skipping /home - not owner
	I1007 10:46:48.485445   23621 main.go:141] libmachine: (ha-406505-m02) define libvirt domain using xml: 
	I1007 10:46:48.485473   23621 main.go:141] libmachine: (ha-406505-m02) <domain type='kvm'>
	I1007 10:46:48.485489   23621 main.go:141] libmachine: (ha-406505-m02)   <name>ha-406505-m02</name>
	I1007 10:46:48.485497   23621 main.go:141] libmachine: (ha-406505-m02)   <memory unit='MiB'>2200</memory>
	I1007 10:46:48.485528   23621 main.go:141] libmachine: (ha-406505-m02)   <vcpu>2</vcpu>
	I1007 10:46:48.485552   23621 main.go:141] libmachine: (ha-406505-m02)   <features>
	I1007 10:46:48.485563   23621 main.go:141] libmachine: (ha-406505-m02)     <acpi/>
	I1007 10:46:48.485574   23621 main.go:141] libmachine: (ha-406505-m02)     <apic/>
	I1007 10:46:48.485584   23621 main.go:141] libmachine: (ha-406505-m02)     <pae/>
	I1007 10:46:48.485596   23621 main.go:141] libmachine: (ha-406505-m02)     
	I1007 10:46:48.485608   23621 main.go:141] libmachine: (ha-406505-m02)   </features>
	I1007 10:46:48.485625   23621 main.go:141] libmachine: (ha-406505-m02)   <cpu mode='host-passthrough'>
	I1007 10:46:48.485637   23621 main.go:141] libmachine: (ha-406505-m02)   
	I1007 10:46:48.485645   23621 main.go:141] libmachine: (ha-406505-m02)   </cpu>
	I1007 10:46:48.485659   23621 main.go:141] libmachine: (ha-406505-m02)   <os>
	I1007 10:46:48.485671   23621 main.go:141] libmachine: (ha-406505-m02)     <type>hvm</type>
	I1007 10:46:48.485684   23621 main.go:141] libmachine: (ha-406505-m02)     <boot dev='cdrom'/>
	I1007 10:46:48.485699   23621 main.go:141] libmachine: (ha-406505-m02)     <boot dev='hd'/>
	I1007 10:46:48.485712   23621 main.go:141] libmachine: (ha-406505-m02)     <bootmenu enable='no'/>
	I1007 10:46:48.485721   23621 main.go:141] libmachine: (ha-406505-m02)   </os>
	I1007 10:46:48.485801   23621 main.go:141] libmachine: (ha-406505-m02)   <devices>
	I1007 10:46:48.485824   23621 main.go:141] libmachine: (ha-406505-m02)     <disk type='file' device='cdrom'>
	I1007 10:46:48.485840   23621 main.go:141] libmachine: (ha-406505-m02)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/boot2docker.iso'/>
	I1007 10:46:48.485854   23621 main.go:141] libmachine: (ha-406505-m02)       <target dev='hdc' bus='scsi'/>
	I1007 10:46:48.485865   23621 main.go:141] libmachine: (ha-406505-m02)       <readonly/>
	I1007 10:46:48.485875   23621 main.go:141] libmachine: (ha-406505-m02)     </disk>
	I1007 10:46:48.485902   23621 main.go:141] libmachine: (ha-406505-m02)     <disk type='file' device='disk'>
	I1007 10:46:48.485924   23621 main.go:141] libmachine: (ha-406505-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 10:46:48.485938   23621 main.go:141] libmachine: (ha-406505-m02)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/ha-406505-m02.rawdisk'/>
	I1007 10:46:48.485950   23621 main.go:141] libmachine: (ha-406505-m02)       <target dev='hda' bus='virtio'/>
	I1007 10:46:48.485972   23621 main.go:141] libmachine: (ha-406505-m02)     </disk>
	I1007 10:46:48.485982   23621 main.go:141] libmachine: (ha-406505-m02)     <interface type='network'>
	I1007 10:46:48.485991   23621 main.go:141] libmachine: (ha-406505-m02)       <source network='mk-ha-406505'/>
	I1007 10:46:48.485999   23621 main.go:141] libmachine: (ha-406505-m02)       <model type='virtio'/>
	I1007 10:46:48.486005   23621 main.go:141] libmachine: (ha-406505-m02)     </interface>
	I1007 10:46:48.486013   23621 main.go:141] libmachine: (ha-406505-m02)     <interface type='network'>
	I1007 10:46:48.486025   23621 main.go:141] libmachine: (ha-406505-m02)       <source network='default'/>
	I1007 10:46:48.486034   23621 main.go:141] libmachine: (ha-406505-m02)       <model type='virtio'/>
	I1007 10:46:48.486044   23621 main.go:141] libmachine: (ha-406505-m02)     </interface>
	I1007 10:46:48.486053   23621 main.go:141] libmachine: (ha-406505-m02)     <serial type='pty'>
	I1007 10:46:48.486063   23621 main.go:141] libmachine: (ha-406505-m02)       <target port='0'/>
	I1007 10:46:48.486074   23621 main.go:141] libmachine: (ha-406505-m02)     </serial>
	I1007 10:46:48.486084   23621 main.go:141] libmachine: (ha-406505-m02)     <console type='pty'>
	I1007 10:46:48.486094   23621 main.go:141] libmachine: (ha-406505-m02)       <target type='serial' port='0'/>
	I1007 10:46:48.486098   23621 main.go:141] libmachine: (ha-406505-m02)     </console>
	I1007 10:46:48.486106   23621 main.go:141] libmachine: (ha-406505-m02)     <rng model='virtio'>
	I1007 10:46:48.486122   23621 main.go:141] libmachine: (ha-406505-m02)       <backend model='random'>/dev/random</backend>
	I1007 10:46:48.486134   23621 main.go:141] libmachine: (ha-406505-m02)     </rng>
	I1007 10:46:48.486147   23621 main.go:141] libmachine: (ha-406505-m02)     
	I1007 10:46:48.486157   23621 main.go:141] libmachine: (ha-406505-m02)     
	I1007 10:46:48.486167   23621 main.go:141] libmachine: (ha-406505-m02)   </devices>
	I1007 10:46:48.486184   23621 main.go:141] libmachine: (ha-406505-m02) </domain>
	I1007 10:46:48.486192   23621 main.go:141] libmachine: (ha-406505-m02) 
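Note: once the libvirt domain XML above has been assembled, the "Creating domain..." step registers and boots it; only then does the DHCP lease / IP wait below begin. The sketch that follows shows the equivalent using the virsh CLI rather than minikube's libvirt bindings; the XML file path is hypothetical.

// Hedged sketch: define and start the domain described by the XML above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	xmlPath := "/tmp/ha-406505-m02.xml" // hypothetical file holding the XML printed above
	for _, args := range [][]string{
		{"define", xmlPath},        // register the domain with libvirt
		{"start", "ha-406505-m02"}, // boot it; DHCP assigns an IP afterwards
	} {
		cmd := exec.Command("virsh", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "virsh", args[0], "failed:", err)
			os.Exit(1)
		}
	}
}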
	I1007 10:46:48.492959   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:11:dc:7d in network default
	I1007 10:46:48.493532   23621 main.go:141] libmachine: (ha-406505-m02) Ensuring networks are active...
	I1007 10:46:48.493555   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:48.494204   23621 main.go:141] libmachine: (ha-406505-m02) Ensuring network default is active
	I1007 10:46:48.494531   23621 main.go:141] libmachine: (ha-406505-m02) Ensuring network mk-ha-406505 is active
	I1007 10:46:48.494994   23621 main.go:141] libmachine: (ha-406505-m02) Getting domain xml...
	I1007 10:46:48.495697   23621 main.go:141] libmachine: (ha-406505-m02) Creating domain...
	I1007 10:46:49.708066   23621 main.go:141] libmachine: (ha-406505-m02) Waiting to get IP...
	I1007 10:46:49.709797   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:49.710242   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:49.710274   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:49.710223   23988 retry.go:31] will retry after 204.773065ms: waiting for machine to come up
	I1007 10:46:49.916620   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:49.917029   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:49.917049   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:49.916992   23988 retry.go:31] will retry after 235.714104ms: waiting for machine to come up
	I1007 10:46:50.154409   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:50.154821   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:50.154854   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:50.154800   23988 retry.go:31] will retry after 473.988416ms: waiting for machine to come up
	I1007 10:46:50.630146   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:50.630593   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:50.630617   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:50.630561   23988 retry.go:31] will retry after 436.51933ms: waiting for machine to come up
	I1007 10:46:51.068126   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:51.068602   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:51.068629   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:51.068593   23988 retry.go:31] will retry after 554.772898ms: waiting for machine to come up
	I1007 10:46:51.625423   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:51.625799   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:51.625821   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:51.625760   23988 retry.go:31] will retry after 790.073775ms: waiting for machine to come up
	I1007 10:46:52.417715   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:52.418041   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:52.418068   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:52.417996   23988 retry.go:31] will retry after 1.143940138s: waiting for machine to come up
	I1007 10:46:53.563665   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:53.564172   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:53.564191   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:53.564119   23988 retry.go:31] will retry after 1.216262675s: waiting for machine to come up
	I1007 10:46:54.782182   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:54.782642   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:54.782668   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:54.782571   23988 retry.go:31] will retry after 1.336251943s: waiting for machine to come up
	I1007 10:46:56.120924   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:56.121343   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:56.121364   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:56.121297   23988 retry.go:31] will retry after 2.26253824s: waiting for machine to come up
	I1007 10:46:58.385702   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:58.386103   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:58.386134   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:58.386057   23988 retry.go:31] will retry after 1.827723489s: waiting for machine to come up
	I1007 10:47:00.215316   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:00.215726   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:47:00.215747   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:47:00.215701   23988 retry.go:31] will retry after 2.599258612s: waiting for machine to come up
	I1007 10:47:02.818331   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:02.818781   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:47:02.818803   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:47:02.818737   23988 retry.go:31] will retry after 3.193038382s: waiting for machine to come up
	I1007 10:47:06.014368   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:06.014784   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:47:06.014809   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:47:06.014743   23988 retry.go:31] will retry after 3.576827994s: waiting for machine to come up
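The run of retry.go lines above shows the driver polling the new domain for a DHCP lease, sleeping a little longer after each failed lookup until an IP appears. Below is a minimal, self-contained sketch of that retry-with-growing-backoff pattern in Go; lookupLeaseIP is a hypothetical stand-in for the libvirt lease lookup, and the delays are illustrative, not minikube's actual values.

```go
// retrywait.go: poll a lookup function with growing, jittered delays,
// mirroring the "will retry after ...: waiting for machine to come up"
// lines in the log above. lookupLeaseIP is a hypothetical stand-in.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupLeaseIP would ask libvirt for the domain's current lease.
// Here it fails a few times and then returns a fixed address.
func lookupLeaseIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoLease
	}
	return "192.168.39.37", nil
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupLeaseIP(attempt)
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out waiting for IP: %w", err)
		}
		// Grow the delay and add jitter so retries do not line up.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
}

func main() {
	ip, err := waitForIP(30 * time.Second)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("Found IP for machine:", ip)
}
```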
	I1007 10:47:09.593923   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:09.594365   23621 main.go:141] libmachine: (ha-406505-m02) Found IP for machine: 192.168.39.37
	I1007 10:47:09.594385   23621 main.go:141] libmachine: (ha-406505-m02) Reserving static IP address...
	I1007 10:47:09.594399   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has current primary IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:09.594746   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find host DHCP lease matching {name: "ha-406505-m02", mac: "52:54:00:c4:d0:65", ip: "192.168.39.37"} in network mk-ha-406505
	I1007 10:47:09.668479   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Getting to WaitForSSH function...
	I1007 10:47:09.668509   23621 main.go:141] libmachine: (ha-406505-m02) Reserved static IP address: 192.168.39.37
	I1007 10:47:09.668519   23621 main.go:141] libmachine: (ha-406505-m02) Waiting for SSH to be available...
	I1007 10:47:09.670956   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:09.671275   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505
	I1007 10:47:09.671303   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find defined IP address of network mk-ha-406505 interface with MAC address 52:54:00:c4:d0:65
	I1007 10:47:09.671456   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Using SSH client type: external
	I1007 10:47:09.671481   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa (-rw-------)
	I1007 10:47:09.671540   23621 main.go:141] libmachine: (ha-406505-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 10:47:09.671566   23621 main.go:141] libmachine: (ha-406505-m02) DBG | About to run SSH command:
	I1007 10:47:09.671585   23621 main.go:141] libmachine: (ha-406505-m02) DBG | exit 0
	I1007 10:47:09.675078   23621 main.go:141] libmachine: (ha-406505-m02) DBG | SSH cmd err, output: exit status 255: 
	I1007 10:47:09.675099   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1007 10:47:09.675105   23621 main.go:141] libmachine: (ha-406505-m02) DBG | command : exit 0
	I1007 10:47:09.675110   23621 main.go:141] libmachine: (ha-406505-m02) DBG | err     : exit status 255
	I1007 10:47:09.675118   23621 main.go:141] libmachine: (ha-406505-m02) DBG | output  : 
	I1007 10:47:12.677242   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Getting to WaitForSSH function...
	I1007 10:47:12.679802   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:12.680241   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:12.680268   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:12.680410   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Using SSH client type: external
	I1007 10:47:12.680433   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa (-rw-------)
	I1007 10:47:12.680466   23621 main.go:141] libmachine: (ha-406505-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.37 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 10:47:12.680481   23621 main.go:141] libmachine: (ha-406505-m02) DBG | About to run SSH command:
	I1007 10:47:12.680494   23621 main.go:141] libmachine: (ha-406505-m02) DBG | exit 0
	I1007 10:47:12.804189   23621 main.go:141] libmachine: (ha-406505-m02) DBG | SSH cmd err, output: <nil>: 
	I1007 10:47:12.804446   23621 main.go:141] libmachine: (ha-406505-m02) KVM machine creation complete!
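The WaitForSSH step above shells out to the system ssh client with non-interactive options and runs "exit 0" until the command succeeds (the first attempt fails with exit status 255 because the guest is not listening yet). A minimal sketch of that readiness probe is shown below, assuming a hypothetical user, host, and key path; it is not minikube's own WaitForSSH code.

```go
// sshwait.go: probe SSH readiness by running "exit 0" through the system
// ssh client with the same kind of non-interactive options seen in the log.
// User, host, and key path are hypothetical placeholders.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(user, host, keyPath string) bool {
	cmd := exec.Command("/usr/bin/ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		"exit 0")
	return cmd.Run() == nil // exit status 0 means the guest accepts SSH
}

func main() {
	for !sshReady("docker", "192.168.39.37", "/path/to/id_rsa") {
		fmt.Println("SSH not ready yet, retrying in 3s...")
		time.Sleep(3 * time.Second)
	}
	fmt.Println("SSH is available")
}
```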
	I1007 10:47:12.804774   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetConfigRaw
	I1007 10:47:12.805439   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:12.805661   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:12.805843   23621 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 10:47:12.805857   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetState
	I1007 10:47:12.807411   23621 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 10:47:12.807423   23621 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 10:47:12.807428   23621 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 10:47:12.807434   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:12.809666   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:12.809974   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:12.810001   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:12.810264   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:12.810464   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:12.810653   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:12.810803   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:12.810961   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:47:12.811169   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I1007 10:47:12.811184   23621 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 10:47:12.919372   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:47:12.919420   23621 main.go:141] libmachine: Detecting the provisioner...
	I1007 10:47:12.919430   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:12.922565   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:12.922966   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:12.922996   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:12.923171   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:12.923359   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:12.923510   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:12.923635   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:12.923785   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:47:12.923977   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I1007 10:47:12.924003   23621 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 10:47:13.033371   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 10:47:13.033448   23621 main.go:141] libmachine: found compatible host: buildroot
	I1007 10:47:13.033459   23621 main.go:141] libmachine: Provisioning with buildroot...
	I1007 10:47:13.033472   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetMachineName
	I1007 10:47:13.033744   23621 buildroot.go:166] provisioning hostname "ha-406505-m02"
	I1007 10:47:13.033784   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetMachineName
	I1007 10:47:13.033956   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:13.036444   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.036782   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.036811   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.036919   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:13.037077   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.037212   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.037334   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:13.037500   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:47:13.037700   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I1007 10:47:13.037718   23621 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406505-m02 && echo "ha-406505-m02" | sudo tee /etc/hostname
	I1007 10:47:13.163957   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406505-m02
	
	I1007 10:47:13.164007   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:13.166790   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.167220   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.167245   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.167419   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:13.167615   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.167799   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.167934   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:13.168112   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:47:13.168270   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I1007 10:47:13.168286   23621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406505-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406505-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406505-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 10:47:13.289811   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:47:13.289837   23621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 10:47:13.289852   23621 buildroot.go:174] setting up certificates
	I1007 10:47:13.289860   23621 provision.go:84] configureAuth start
	I1007 10:47:13.289876   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetMachineName
	I1007 10:47:13.290178   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetIP
	I1007 10:47:13.292829   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.293122   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.293145   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.293256   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:13.296131   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.296632   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.296661   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.296855   23621 provision.go:143] copyHostCerts
	I1007 10:47:13.296886   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:47:13.296917   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 10:47:13.296926   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:47:13.296997   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 10:47:13.297093   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:47:13.297110   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 10:47:13.297114   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:47:13.297137   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 10:47:13.297178   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:47:13.297193   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 10:47:13.297199   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:47:13.297219   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 10:47:13.297264   23621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.ha-406505-m02 san=[127.0.0.1 192.168.39.37 ha-406505-m02 localhost minikube]
	I1007 10:47:13.470867   23621 provision.go:177] copyRemoteCerts
	I1007 10:47:13.470925   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 10:47:13.470948   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:13.473620   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.473865   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.473901   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.474152   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:13.474379   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.474538   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:13.474650   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa Username:docker}
	I1007 10:47:13.558906   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 10:47:13.558995   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 10:47:13.584265   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 10:47:13.584335   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 10:47:13.609098   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 10:47:13.609208   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 10:47:13.633989   23621 provision.go:87] duration metric: took 344.11512ms to configureAuth
	I1007 10:47:13.634025   23621 buildroot.go:189] setting minikube options for container-runtime
	I1007 10:47:13.634234   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:47:13.634302   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:13.636945   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.637279   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.637307   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.637491   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:13.637663   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.637855   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.638031   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:13.638190   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:47:13.638341   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I1007 10:47:13.638355   23621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 10:47:13.873602   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 10:47:13.873628   23621 main.go:141] libmachine: Checking connection to Docker...
	I1007 10:47:13.873636   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetURL
	I1007 10:47:13.874889   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Using libvirt version 6000000
	I1007 10:47:13.877460   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.877837   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.877860   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.878084   23621 main.go:141] libmachine: Docker is up and running!
	I1007 10:47:13.878101   23621 main.go:141] libmachine: Reticulating splines...
	I1007 10:47:13.878109   23621 client.go:171] duration metric: took 25.765852825s to LocalClient.Create
	I1007 10:47:13.878137   23621 start.go:167] duration metric: took 25.765919747s to libmachine.API.Create "ha-406505"
	I1007 10:47:13.878150   23621 start.go:293] postStartSetup for "ha-406505-m02" (driver="kvm2")
	I1007 10:47:13.878166   23621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 10:47:13.878189   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:13.878390   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 10:47:13.878411   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:13.880668   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.881014   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.881044   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.881180   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:13.881364   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.881519   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:13.881655   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa Username:docker}
	I1007 10:47:13.968514   23621 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 10:47:13.973091   23621 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 10:47:13.973116   23621 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 10:47:13.973185   23621 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 10:47:13.973262   23621 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 10:47:13.973272   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /etc/ssl/certs/110962.pem
	I1007 10:47:13.973349   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 10:47:13.984972   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:47:14.013706   23621 start.go:296] duration metric: took 135.541721ms for postStartSetup
	I1007 10:47:14.013768   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetConfigRaw
	I1007 10:47:14.014387   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetIP
	I1007 10:47:14.017290   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.017760   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:14.017791   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.018011   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:47:14.018210   23621 start.go:128] duration metric: took 25.92528673s to createHost
	I1007 10:47:14.018236   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:14.020800   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.021086   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:14.021115   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.021288   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:14.021489   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:14.021660   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:14.021768   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:14.021952   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:47:14.022115   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I1007 10:47:14.022125   23621 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 10:47:14.132989   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728298034.110680519
	
	I1007 10:47:14.133013   23621 fix.go:216] guest clock: 1728298034.110680519
	I1007 10:47:14.133022   23621 fix.go:229] Guest: 2024-10-07 10:47:14.110680519 +0000 UTC Remote: 2024-10-07 10:47:14.018221797 +0000 UTC m=+73.371361289 (delta=92.458722ms)
	I1007 10:47:14.133040   23621 fix.go:200] guest clock delta is within tolerance: 92.458722ms
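The fix.go lines above read the guest clock with `date +%s.%N`, compare it to the host-side timestamp, and accept the machine because the ~92ms delta is within tolerance. Below is a small sketch of that comparison in Go; the guest reading is the one from this run, and the tolerance value is a placeholder rather than minikube's configured threshold.

```go
// clockdelta.go: compute guest/host clock skew from a "date +%s.%N" reading,
// as in the fix.go lines above. The tolerance here is a hypothetical value.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	const guestReading = "1728298034.110680519" // `date +%s.%N` output from the guest
	secs, err := strconv.ParseFloat(guestReading, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	host := time.Now() // in the log this is the host-side timestamp of the SSH call
	delta := host.Sub(guest)

	const tolerance = 2 * time.Second // placeholder threshold
	fmt.Printf("guest: %v  host: %v  delta: %v\n", guest, host, delta)
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Println("guest clock delta is within tolerance")
	} else {
		fmt.Println("guest clock delta exceeds tolerance; consider syncing time")
	}
}
```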
	I1007 10:47:14.133051   23621 start.go:83] releasing machines lock for "ha-406505-m02", held for 26.040206453s
	I1007 10:47:14.133067   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:14.133299   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetIP
	I1007 10:47:14.135869   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.136305   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:14.136328   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.139140   23621 out.go:177] * Found network options:
	I1007 10:47:14.140689   23621 out.go:177]   - NO_PROXY=192.168.39.250
	W1007 10:47:14.142083   23621 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 10:47:14.142129   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:14.142678   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:14.142868   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:14.142974   23621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 10:47:14.143014   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	W1007 10:47:14.143107   23621 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 10:47:14.143184   23621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 10:47:14.143226   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:14.145983   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.146148   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.146289   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:14.146315   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.146499   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:14.146575   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:14.146609   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.146657   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:14.146758   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:14.146834   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:14.146877   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:14.146982   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa Username:docker}
	I1007 10:47:14.147039   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:14.147184   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa Username:docker}
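The two "fail to check proxy env: Error ip not in block" warnings above appear to come from testing whether the new node's IP is already covered by a NO_PROXY entry (here NO_PROXY only lists the primary node, 192.168.39.250). The sketch below illustrates that kind of check with a hypothetical helper; it is not minikube's proxy.go.

```go
// noproxycheck.go: decide whether an IP is covered by a NO_PROXY entry
// (plain IP or CIDR). Hypothetical helper illustrating the kind of check
// behind the "Error ip not in block" warnings above.
package main

import (
	"fmt"
	"net"
	"strings"
)

func coveredByNoProxy(ip, noProxy string) bool {
	addr := net.ParseIP(ip)
	if addr == nil {
		return false
	}
	for _, entry := range strings.Split(noProxy, ",") {
		entry = strings.TrimSpace(entry)
		if entry == "" {
			continue
		}
		if entry == ip {
			return true // exact IP match
		}
		if _, block, err := net.ParseCIDR(entry); err == nil && block.Contains(addr) {
			return true // IP falls inside a NO_PROXY CIDR block
		}
	}
	return false
}

func main() {
	fmt.Println(coveredByNoProxy("192.168.39.37", "192.168.39.250")) // false: not "in block"
	fmt.Println(coveredByNoProxy("192.168.39.37", "192.168.39.0/24")) // true
}
```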
	I1007 10:47:14.387899   23621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 10:47:14.394771   23621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 10:47:14.394848   23621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 10:47:14.410661   23621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 10:47:14.410689   23621 start.go:495] detecting cgroup driver to use...
	I1007 10:47:14.410772   23621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 10:47:14.427868   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 10:47:14.444153   23621 docker.go:217] disabling cri-docker service (if available) ...
	I1007 10:47:14.444206   23621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 10:47:14.460223   23621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 10:47:14.476365   23621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 10:47:14.606104   23621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 10:47:14.745910   23621 docker.go:233] disabling docker service ...
	I1007 10:47:14.745980   23621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 10:47:14.760987   23621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 10:47:14.774829   23621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 10:47:14.912287   23621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 10:47:15.035180   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 10:47:15.050257   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 10:47:15.070114   23621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 10:47:15.070181   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.081232   23621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 10:47:15.081328   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.097360   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.109085   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.120920   23621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 10:47:15.132712   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.143857   23621 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.162242   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.173052   23621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 10:47:15.183576   23621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 10:47:15.183636   23621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 10:47:15.198592   23621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 10:47:15.209269   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:47:15.343340   23621 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 10:47:15.435410   23621 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 10:47:15.435495   23621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 10:47:15.440650   23621 start.go:563] Will wait 60s for crictl version
	I1007 10:47:15.440716   23621 ssh_runner.go:195] Run: which crictl
	I1007 10:47:15.445010   23621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 10:47:15.485747   23621 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 10:47:15.485842   23621 ssh_runner.go:195] Run: crio --version
	I1007 10:47:15.514633   23621 ssh_runner.go:195] Run: crio --version
	I1007 10:47:15.544607   23621 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 10:47:15.546495   23621 out.go:177]   - env NO_PROXY=192.168.39.250
	I1007 10:47:15.547763   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetIP
	I1007 10:47:15.550503   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:15.550835   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:15.550856   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:15.551135   23621 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 10:47:15.555619   23621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:47:15.568228   23621 mustload.go:65] Loading cluster: ha-406505
	I1007 10:47:15.568429   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:47:15.568711   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:47:15.568757   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:47:15.583930   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32787
	I1007 10:47:15.584453   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:47:15.584977   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:47:15.584999   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:47:15.585308   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:47:15.585449   23621 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:47:15.586928   23621 host.go:66] Checking if "ha-406505" exists ...
	I1007 10:47:15.587242   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:47:15.587291   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:47:15.601672   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41215
	I1007 10:47:15.602061   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:47:15.602537   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:47:15.602556   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:47:15.602817   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:47:15.602964   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:47:15.603079   23621 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505 for IP: 192.168.39.37
	I1007 10:47:15.603088   23621 certs.go:194] generating shared ca certs ...
	I1007 10:47:15.603106   23621 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:47:15.603231   23621 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 10:47:15.603292   23621 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 10:47:15.603306   23621 certs.go:256] generating profile certs ...
	I1007 10:47:15.603393   23621 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key
	I1007 10:47:15.603425   23621 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.9c139e39
	I1007 10:47:15.603446   23621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.9c139e39 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.250 192.168.39.37 192.168.39.254]
	I1007 10:47:15.744161   23621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.9c139e39 ...
	I1007 10:47:15.744193   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.9c139e39: {Name:mkae386a40e79e3b04467f9f82e8cc7ab31669ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:47:15.744370   23621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.9c139e39 ...
	I1007 10:47:15.744387   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.9c139e39: {Name:mkd96b82bea042246d2ff8a9f6d26e46ce2f8d80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:47:15.744484   23621 certs.go:381] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.9c139e39 -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt
	I1007 10:47:15.744631   23621 certs.go:385] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.9c139e39 -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key
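The crypto.go lines above generate the apiserver certificate with SANs for the service VIP, loopback, both node IPs, and the HA virtual IP. As a rough illustration, the sketch below builds a certificate with the same IP SANs using Go's crypto/x509; it is self-signed for brevity, whereas the real flow signs with the profile's CA key, and the subject fields are placeholders.

```go
// apiservercert.go: generate a certificate whose SANs include the IPs listed
// in the crypto.go line above. Self-signed sketch for illustration only.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-406505-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.250"), net.ParseIP("192.168.39.37"), net.ParseIP("192.168.39.254"),
		},
	}

	// Self-signed here; the real flow signs with ca.pem / ca-key.pem.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```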
	I1007 10:47:15.744793   23621 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key
	I1007 10:47:15.744812   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 10:47:15.744830   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 10:47:15.744846   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 10:47:15.744865   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 10:47:15.744882   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 10:47:15.744900   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 10:47:15.744919   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 10:47:15.744937   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 10:47:15.745001   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 10:47:15.745040   23621 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 10:47:15.745053   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 10:47:15.745085   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 10:47:15.745117   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 10:47:15.745148   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 10:47:15.745217   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:47:15.745255   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:47:15.745278   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem -> /usr/share/ca-certificates/11096.pem
	I1007 10:47:15.745298   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /usr/share/ca-certificates/110962.pem
	I1007 10:47:15.745339   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:47:15.748712   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:47:15.749114   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:47:15.749137   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:47:15.749337   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:47:15.749533   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:47:15.749703   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:47:15.749841   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:47:15.828372   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 10:47:15.833129   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 10:47:15.845052   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 10:47:15.849337   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1007 10:47:15.859666   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 10:47:15.864073   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 10:47:15.882571   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 10:47:15.888480   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1007 10:47:15.901431   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 10:47:15.905968   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 10:47:15.922566   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 10:47:15.927045   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 10:47:15.940895   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 10:47:15.967974   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 10:47:15.993940   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 10:47:16.018147   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 10:47:16.043434   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1007 10:47:16.069121   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 10:47:16.093333   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 10:47:16.117209   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 10:47:16.141941   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 10:47:16.166358   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 10:47:16.191390   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 10:47:16.216168   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 10:47:16.233270   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1007 10:47:16.250510   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 10:47:16.267543   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1007 10:47:16.287073   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 10:47:16.306608   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 10:47:16.324070   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 10:47:16.341221   23621 ssh_runner.go:195] Run: openssl version
	I1007 10:47:16.347150   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 10:47:16.358131   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 10:47:16.362824   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 10:47:16.362874   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 10:47:16.368599   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 10:47:16.378927   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 10:47:16.389775   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:47:16.394445   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:47:16.394503   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:47:16.400151   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 10:47:16.410835   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 10:47:16.421451   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 10:47:16.425954   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 10:47:16.426044   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 10:47:16.432023   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
	I1007 10:47:16.443765   23621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 10:47:16.448499   23621 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 10:47:16.448550   23621 kubeadm.go:934] updating node {m02 192.168.39.37 8443 v1.31.1 crio true true} ...
	I1007 10:47:16.448621   23621 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406505-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.37
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 10:47:16.448641   23621 kube-vip.go:115] generating kube-vip config ...
	I1007 10:47:16.448674   23621 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 10:47:16.465324   23621 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 10:47:16.465389   23621 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1007 10:47:16.465443   23621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 10:47:16.476363   23621 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1007 10:47:16.476434   23621 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1007 10:47:16.487040   23621 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1007 10:47:16.487085   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 10:47:16.487142   23621 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1007 10:47:16.487150   23621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 10:47:16.487275   23621 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1007 10:47:16.491771   23621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1007 10:47:16.491798   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1007 10:47:17.509026   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 10:47:17.524363   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 10:47:17.524452   23621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 10:47:17.528672   23621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1007 10:47:17.528709   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1007 10:47:17.599765   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 10:47:17.599853   23621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 10:47:17.612766   23621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1007 10:47:17.612810   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1007 10:47:18.077437   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 10:47:18.088177   23621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1007 10:47:18.105381   23621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 10:47:18.122405   23621 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 10:47:18.142555   23621 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 10:47:18.146470   23621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:47:18.159594   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:47:18.291092   23621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:47:18.309170   23621 host.go:66] Checking if "ha-406505" exists ...
	I1007 10:47:18.309657   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:47:18.309712   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:47:18.324913   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37239
	I1007 10:47:18.325340   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:47:18.325803   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:47:18.325831   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:47:18.326166   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:47:18.326334   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:47:18.326443   23621 start.go:317] joinCluster: &{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:47:18.326602   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1007 10:47:18.326630   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:47:18.329583   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:47:18.329975   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:47:18.330001   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:47:18.330140   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:47:18.330306   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:47:18.330451   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:47:18.330595   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:47:18.480055   23621 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:47:18.480129   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hab5tp.p59kud3l77ixefj4 --discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-406505-m02 --control-plane --apiserver-advertise-address=192.168.39.37 --apiserver-bind-port=8443"
	I1007 10:47:40.053984   23621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hab5tp.p59kud3l77ixefj4 --discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-406505-m02 --control-plane --apiserver-advertise-address=192.168.39.37 --apiserver-bind-port=8443": (21.573829794s)
	I1007 10:47:40.054022   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1007 10:47:40.624911   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-406505-m02 minikube.k8s.io/updated_at=2024_10_07T10_47_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f minikube.k8s.io/name=ha-406505 minikube.k8s.io/primary=false
	I1007 10:47:40.773203   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-406505-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1007 10:47:40.895450   23621 start.go:319] duration metric: took 22.569002454s to joinCluster
	I1007 10:47:40.895532   23621 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:47:40.895833   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:47:40.897246   23621 out.go:177] * Verifying Kubernetes components...
	I1007 10:47:40.898575   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:47:41.187385   23621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:47:41.220775   23621 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:47:41.221110   23621 kapi.go:59] client config for ha-406505: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.crt", KeyFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key", CAFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 10:47:41.221195   23621 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.250:8443
	I1007 10:47:41.221469   23621 node_ready.go:35] waiting up to 6m0s for node "ha-406505-m02" to be "Ready" ...
	I1007 10:47:41.221568   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:41.221578   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:41.221589   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:41.221596   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:41.242142   23621 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1007 10:47:41.721789   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:41.721819   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:41.721830   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:41.721836   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:41.725638   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:42.222559   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:42.222582   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:42.222592   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:42.222597   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:42.226807   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:42.722633   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:42.722659   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:42.722670   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:42.722676   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:42.727142   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:43.222278   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:43.222306   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:43.222318   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:43.222325   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:43.225924   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:43.226434   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:43.722388   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:43.722413   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:43.722421   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:43.722426   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:43.726394   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:44.221754   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:44.221782   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:44.221791   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:44.221797   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:44.225377   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:44.722382   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:44.722405   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:44.722415   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:44.722421   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:44.726019   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:45.222002   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:45.222024   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:45.222035   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:45.222042   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:45.228065   23621 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 10:47:45.228617   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:45.722139   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:45.722161   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:45.722169   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:45.722174   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:45.726310   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:46.221951   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:46.221984   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:46.221995   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:46.222001   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:46.226108   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:46.722407   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:46.722427   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:46.722434   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:46.722439   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:46.726228   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:47.222433   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:47.222457   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:47.222466   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:47.222471   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:47.226517   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:47.722508   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:47.722532   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:47.722541   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:47.722546   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:47.725944   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:47.726592   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:48.222456   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:48.222477   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:48.222487   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:48.222492   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:48.568208   23621 round_trippers.go:574] Response Status: 200 OK in 345 milliseconds
	I1007 10:47:48.721707   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:48.721729   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:48.721737   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:48.721740   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:48.725191   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:49.222104   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:49.222129   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:49.222137   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:49.222142   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:49.226421   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:49.722572   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:49.722597   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:49.722606   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:49.722610   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:49.726213   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:49.726960   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:50.222350   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:50.222373   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:50.222381   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:50.222384   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:50.226118   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:50.722605   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:50.722631   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:50.722640   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:50.722645   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:50.726160   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:51.221666   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:51.221694   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:51.221714   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:51.221721   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:51.225253   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:51.722133   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:51.722158   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:51.722167   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:51.722171   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:51.725645   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:52.221757   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:52.221780   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:52.221787   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:52.221792   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:52.226043   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:52.226536   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:52.721878   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:52.721905   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:52.721913   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:52.721917   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:52.725379   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:53.221755   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:53.221777   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:53.221786   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:53.221789   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:53.225585   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:53.721883   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:53.721908   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:53.721920   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:53.721925   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:53.725474   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:54.221694   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:54.221720   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:54.221731   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:54.221737   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:54.225868   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:54.226748   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:54.722061   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:54.722086   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:54.722094   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:54.722099   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:54.725979   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:55.221978   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:55.222010   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:55.222019   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:55.222022   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:55.225724   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:55.721884   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:55.721911   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:55.721924   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:55.721931   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:55.726067   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:56.222572   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:56.222595   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:56.222603   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:56.222606   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:56.227082   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:56.227824   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:56.722293   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:56.722317   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:56.722325   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:56.722329   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:56.726068   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:57.222438   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:57.222461   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:57.222469   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:57.222478   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:57.226913   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:57.722050   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:57.722075   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:57.722083   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:57.722087   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:57.726100   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:58.222538   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:58.222560   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:58.222568   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:58.222572   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:58.227033   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:58.722681   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:58.722703   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:58.722711   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:58.722717   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:58.725986   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:58.726597   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:59.221983   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:59.222007   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:59.222015   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:59.222018   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:59.225585   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:59.722632   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:59.722658   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:59.722668   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:59.722672   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:59.726213   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:00.222316   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:00.222339   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.222347   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.222351   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.225920   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:00.722449   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:00.722475   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.722484   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.722488   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.725827   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:00.726434   23621 node_ready.go:49] node "ha-406505-m02" has status "Ready":"True"
	I1007 10:48:00.726454   23621 node_ready.go:38] duration metric: took 19.504967744s for node "ha-406505-m02" to be "Ready" ...
	I1007 10:48:00.726462   23621 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 10:48:00.726536   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:48:00.726548   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.726555   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.726559   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.731138   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:00.737911   23621 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ghmwd" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.737985   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghmwd
	I1007 10:48:00.737993   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.738001   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.738005   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.741209   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:00.742237   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:00.742253   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.742260   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.742265   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.745097   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:48:00.745537   23621 pod_ready.go:93] pod "coredns-7c65d6cfc9-ghmwd" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:00.745556   23621 pod_ready.go:82] duration metric: took 7.621102ms for pod "coredns-7c65d6cfc9-ghmwd" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.745565   23621 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xzc88" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.745629   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xzc88
	I1007 10:48:00.745638   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.745645   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.745650   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.748174   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:48:00.748906   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:00.748922   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.748930   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.748936   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.751224   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:48:00.751710   23621 pod_ready.go:93] pod "coredns-7c65d6cfc9-xzc88" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:00.751731   23621 pod_ready.go:82] duration metric: took 6.159383ms for pod "coredns-7c65d6cfc9-xzc88" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.751740   23621 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.751799   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-406505
	I1007 10:48:00.751809   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.751816   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.751820   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.755074   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:00.755602   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:00.755617   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.755625   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.755629   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.758258   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:48:00.758840   23621 pod_ready.go:93] pod "etcd-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:00.758864   23621 pod_ready.go:82] duration metric: took 7.117967ms for pod "etcd-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.758875   23621 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.758941   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-406505-m02
	I1007 10:48:00.758951   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.758962   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.758969   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.761946   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:48:00.762531   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:00.762545   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.762555   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.762563   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.765249   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:48:00.765990   23621 pod_ready.go:93] pod "etcd-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:00.766010   23621 pod_ready.go:82] duration metric: took 7.127993ms for pod "etcd-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.766024   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.923419   23621 request.go:632] Waited for 157.329652ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505
	I1007 10:48:00.923504   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505
	I1007 10:48:00.923514   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.923521   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.923526   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.926903   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:01.122872   23621 request.go:632] Waited for 195.370343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:01.122996   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:01.123006   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:01.123014   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:01.123018   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:01.126358   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:01.127128   23621 pod_ready.go:93] pod "kube-apiserver-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:01.127149   23621 pod_ready.go:82] duration metric: took 361.118588ms for pod "kube-apiserver-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:01.127159   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:01.322514   23621 request.go:632] Waited for 195.261429ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505-m02
	I1007 10:48:01.322571   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505-m02
	I1007 10:48:01.322577   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:01.322584   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:01.322589   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:01.326760   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:01.523038   23621 request.go:632] Waited for 195.412644ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:01.523093   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:01.523098   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:01.523105   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:01.523109   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:01.527065   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:01.527580   23621 pod_ready.go:93] pod "kube-apiserver-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:01.527599   23621 pod_ready.go:82] duration metric: took 400.432673ms for pod "kube-apiserver-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:01.527611   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:01.722806   23621 request.go:632] Waited for 195.048611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505
	I1007 10:48:01.722880   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505
	I1007 10:48:01.722888   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:01.722898   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:01.722904   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:01.727096   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:01.923348   23621 request.go:632] Waited for 195.373775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:01.923440   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:01.923452   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:01.923463   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:01.923469   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:01.927522   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:01.927961   23621 pod_ready.go:93] pod "kube-controller-manager-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:01.927977   23621 pod_ready.go:82] duration metric: took 400.359633ms for pod "kube-controller-manager-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:01.928001   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:02.123092   23621 request.go:632] Waited for 195.004556ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505-m02
	I1007 10:48:02.123150   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505-m02
	I1007 10:48:02.123157   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:02.123164   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:02.123167   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:02.127404   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:02.323429   23621 request.go:632] Waited for 195.351342ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:02.323503   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:02.323511   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:02.323522   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:02.323532   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:02.326657   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:02.327382   23621 pod_ready.go:93] pod "kube-controller-manager-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:02.327399   23621 pod_ready.go:82] duration metric: took 399.387331ms for pod "kube-controller-manager-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:02.327409   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6ng4z" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:02.522522   23621 request.go:632] Waited for 195.05566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6ng4z
	I1007 10:48:02.522601   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6ng4z
	I1007 10:48:02.522607   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:02.522615   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:02.522620   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:02.526624   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:02.722785   23621 request.go:632] Waited for 195.392665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:02.722866   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:02.722874   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:02.722885   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:02.722889   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:02.726617   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:02.727143   23621 pod_ready.go:93] pod "kube-proxy-6ng4z" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:02.727160   23621 pod_ready.go:82] duration metric: took 399.745226ms for pod "kube-proxy-6ng4z" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:02.727169   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nlnhf" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:02.923398   23621 request.go:632] Waited for 196.154565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nlnhf
	I1007 10:48:02.923464   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nlnhf
	I1007 10:48:02.923473   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:02.923484   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:02.923492   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:02.926698   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:03.122834   23621 request.go:632] Waited for 195.347405ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:03.122890   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:03.122897   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:03.122905   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:03.122909   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:03.126570   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:03.127726   23621 pod_ready.go:93] pod "kube-proxy-nlnhf" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:03.127745   23621 pod_ready.go:82] duration metric: took 400.569818ms for pod "kube-proxy-nlnhf" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:03.127759   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:03.322923   23621 request.go:632] Waited for 195.092944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505
	I1007 10:48:03.322991   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505
	I1007 10:48:03.322997   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:03.323004   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:03.323009   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:03.326336   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:03.523252   23621 request.go:632] Waited for 196.355286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:03.523323   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:03.523328   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:03.523336   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:03.523344   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:03.526876   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:03.527478   23621 pod_ready.go:93] pod "kube-scheduler-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:03.527506   23621 pod_ready.go:82] duration metric: took 399.737789ms for pod "kube-scheduler-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:03.527518   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:03.722433   23621 request.go:632] Waited for 194.843724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505-m02
	I1007 10:48:03.722510   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505-m02
	I1007 10:48:03.722516   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:03.722524   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:03.722534   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:03.726261   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:03.923306   23621 request.go:632] Waited for 196.357784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:03.923362   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:03.923368   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:03.923375   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:03.923379   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:03.927011   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:03.927578   23621 pod_ready.go:93] pod "kube-scheduler-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:03.927594   23621 pod_ready.go:82] duration metric: took 400.068935ms for pod "kube-scheduler-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:03.927605   23621 pod_ready.go:39] duration metric: took 3.201132108s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
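For reference, the pod_ready lines above amount to fetching each pod and inspecting its Ready condition. Below is a minimal, illustrative client-go sketch of that check — not minikube's own code; the kubeconfig path and pod name are placeholders.

```go
// Illustrative only: check whether a pod reports the Ready condition.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Placeholder kubeconfig path and pod name, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(context.Background(), cs, "kube-system", "kube-scheduler-ha-406505-m02")
	fmt.Println("ready:", ready, "err:", err)
}
```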
	I1007 10:48:03.927618   23621 api_server.go:52] waiting for apiserver process to appear ...
	I1007 10:48:03.927663   23621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 10:48:03.942605   23621 api_server.go:72] duration metric: took 23.047005374s to wait for apiserver process to appear ...
	I1007 10:48:03.942635   23621 api_server.go:88] waiting for apiserver healthz status ...
	I1007 10:48:03.942653   23621 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I1007 10:48:03.947020   23621 api_server.go:279] https://192.168.39.250:8443/healthz returned 200:
	ok
	I1007 10:48:03.947103   23621 round_trippers.go:463] GET https://192.168.39.250:8443/version
	I1007 10:48:03.947113   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:03.947126   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:03.947134   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:03.948044   23621 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1007 10:48:03.948143   23621 api_server.go:141] control plane version: v1.31.1
	I1007 10:48:03.948169   23621 api_server.go:131] duration metric: took 5.525857ms to wait for apiserver health ...
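The healthz probe above boils down to an HTTP GET against /healthz that expects a 200 response with body "ok". A standard-library sketch of that probe follows; the endpoint URL and the InsecureSkipVerify setting are illustrative assumptions, not minikube's configuration.

```go
// Illustrative only: treat a 200 response with body "ok" from /healthz as healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func apiServerHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping certificate verification keeps the sketch short; a real
		// client should trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiServerHealthy("https://192.168.39.250:8443/healthz")
	fmt.Println("healthy:", ok, "err:", err)
}
```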
	I1007 10:48:03.948178   23621 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 10:48:04.122494   23621 request.go:632] Waited for 174.227541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:48:04.122548   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:48:04.122554   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:04.122561   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:04.122565   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:04.127425   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:04.131821   23621 system_pods.go:59] 17 kube-system pods found
	I1007 10:48:04.131853   23621 system_pods.go:61] "coredns-7c65d6cfc9-ghmwd" [8d8533b9-192b-49a8-8d17-96ffd98cb729] Running
	I1007 10:48:04.131860   23621 system_pods.go:61] "coredns-7c65d6cfc9-xzc88" [f22736c0-5ca4-4c9b-bcd4-cf95f9390507] Running
	I1007 10:48:04.131867   23621 system_pods.go:61] "etcd-ha-406505" [06acd5be-60d4-4e5d-878a-eb237eccef90] Running
	I1007 10:48:04.131873   23621 system_pods.go:61] "etcd-ha-406505-m02" [6bac8986-60bc-4067-8d14-39e7a7d89de4] Running
	I1007 10:48:04.131878   23621 system_pods.go:61] "kindnet-h8fh4" [4963cef8-d0f0-47a7-a9f3-4ec6cc1cbdd2] Running
	I1007 10:48:04.131884   23621 system_pods.go:61] "kindnet-pt74h" [bb72605c-a772-4b04-a14d-02efe957c9d0] Running
	I1007 10:48:04.131889   23621 system_pods.go:61] "kube-apiserver-ha-406505" [86ec3125-9faf-431c-829c-74bedca10848] Running
	I1007 10:48:04.131893   23621 system_pods.go:61] "kube-apiserver-ha-406505-m02" [9b1f1980-971a-4059-90f3-75aa418811e9] Running
	I1007 10:48:04.131898   23621 system_pods.go:61] "kube-controller-manager-ha-406505" [9f228931-e5fe-4983-aed5-71ec54a10242] Running
	I1007 10:48:04.131903   23621 system_pods.go:61] "kube-controller-manager-ha-406505-m02" [87c1a5f3-2c61-40b6-8ac6-8641fe480883] Running
	I1007 10:48:04.131908   23621 system_pods.go:61] "kube-proxy-6ng4z" [0bbf71c3-f4c6-44e2-a86f-2528957fd17e] Running
	I1007 10:48:04.131914   23621 system_pods.go:61] "kube-proxy-nlnhf" [053080d5-38da-4108-96aa-f4a8dbe5de91] Running
	I1007 10:48:04.131919   23621 system_pods.go:61] "kube-scheduler-ha-406505" [40a9a2f6-4f4e-48eb-8ef2-958b87cea171] Running
	I1007 10:48:04.131925   23621 system_pods.go:61] "kube-scheduler-ha-406505-m02" [b1def4a5-2143-46b6-ae4a-44b7997ec7b2] Running
	I1007 10:48:04.131932   23621 system_pods.go:61] "kube-vip-ha-406505" [e31ca80b-44ca-4b0a-8d38-ba06f81592f8] Running
	I1007 10:48:04.131939   23621 system_pods.go:61] "kube-vip-ha-406505-m02" [982dab2a-18d2-409a-9d27-51f746597898] Running
	I1007 10:48:04.131945   23621 system_pods.go:61] "storage-provisioner" [be10b32c-e562-40ef-8b47-04cd1caf9778] Running
	I1007 10:48:04.131956   23621 system_pods.go:74] duration metric: took 183.770827ms to wait for pod list to return data ...
	I1007 10:48:04.131966   23621 default_sa.go:34] waiting for default service account to be created ...
	I1007 10:48:04.323406   23621 request.go:632] Waited for 191.335119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/default/serviceaccounts
	I1007 10:48:04.323466   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/default/serviceaccounts
	I1007 10:48:04.323474   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:04.323485   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:04.323491   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:04.326946   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:04.327172   23621 default_sa.go:45] found service account: "default"
	I1007 10:48:04.327188   23621 default_sa.go:55] duration metric: took 195.21627ms for default service account to be created ...
	I1007 10:48:04.327195   23621 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 10:48:04.522586   23621 request.go:632] Waited for 195.315471ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:48:04.522647   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:48:04.522653   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:04.522661   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:04.522664   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:04.527722   23621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 10:48:04.532291   23621 system_pods.go:86] 17 kube-system pods found
	I1007 10:48:04.532319   23621 system_pods.go:89] "coredns-7c65d6cfc9-ghmwd" [8d8533b9-192b-49a8-8d17-96ffd98cb729] Running
	I1007 10:48:04.532328   23621 system_pods.go:89] "coredns-7c65d6cfc9-xzc88" [f22736c0-5ca4-4c9b-bcd4-cf95f9390507] Running
	I1007 10:48:04.532333   23621 system_pods.go:89] "etcd-ha-406505" [06acd5be-60d4-4e5d-878a-eb237eccef90] Running
	I1007 10:48:04.532338   23621 system_pods.go:89] "etcd-ha-406505-m02" [6bac8986-60bc-4067-8d14-39e7a7d89de4] Running
	I1007 10:48:04.532345   23621 system_pods.go:89] "kindnet-h8fh4" [4963cef8-d0f0-47a7-a9f3-4ec6cc1cbdd2] Running
	I1007 10:48:04.532350   23621 system_pods.go:89] "kindnet-pt74h" [bb72605c-a772-4b04-a14d-02efe957c9d0] Running
	I1007 10:48:04.532356   23621 system_pods.go:89] "kube-apiserver-ha-406505" [86ec3125-9faf-431c-829c-74bedca10848] Running
	I1007 10:48:04.532362   23621 system_pods.go:89] "kube-apiserver-ha-406505-m02" [9b1f1980-971a-4059-90f3-75aa418811e9] Running
	I1007 10:48:04.532370   23621 system_pods.go:89] "kube-controller-manager-ha-406505" [9f228931-e5fe-4983-aed5-71ec54a10242] Running
	I1007 10:48:04.532380   23621 system_pods.go:89] "kube-controller-manager-ha-406505-m02" [87c1a5f3-2c61-40b6-8ac6-8641fe480883] Running
	I1007 10:48:04.532386   23621 system_pods.go:89] "kube-proxy-6ng4z" [0bbf71c3-f4c6-44e2-a86f-2528957fd17e] Running
	I1007 10:48:04.532395   23621 system_pods.go:89] "kube-proxy-nlnhf" [053080d5-38da-4108-96aa-f4a8dbe5de91] Running
	I1007 10:48:04.532401   23621 system_pods.go:89] "kube-scheduler-ha-406505" [40a9a2f6-4f4e-48eb-8ef2-958b87cea171] Running
	I1007 10:48:04.532409   23621 system_pods.go:89] "kube-scheduler-ha-406505-m02" [b1def4a5-2143-46b6-ae4a-44b7997ec7b2] Running
	I1007 10:48:04.532415   23621 system_pods.go:89] "kube-vip-ha-406505" [e31ca80b-44ca-4b0a-8d38-ba06f81592f8] Running
	I1007 10:48:04.532422   23621 system_pods.go:89] "kube-vip-ha-406505-m02" [982dab2a-18d2-409a-9d27-51f746597898] Running
	I1007 10:48:04.532426   23621 system_pods.go:89] "storage-provisioner" [be10b32c-e562-40ef-8b47-04cd1caf9778] Running
	I1007 10:48:04.532436   23621 system_pods.go:126] duration metric: took 205.234668ms to wait for k8s-apps to be running ...
	I1007 10:48:04.532449   23621 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 10:48:04.532504   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 10:48:04.548000   23621 system_svc.go:56] duration metric: took 15.524581ms WaitForService to wait for kubelet
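The kubelet check above runs `systemctl is-active --quiet` over SSH and relies on the exit status alone. A local stand-in sketch (running locally and using the plain unit name are simplifications for illustration):

```go
// Illustrative only: systemd reports unit state through the exit code when --quiet is used.
package main

import (
	"fmt"
	"os/exec"
)

func kubeletActive() bool {
	// Exit status 0 means "active"; any other status means the unit is not running.
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}
```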
	I1007 10:48:04.548032   23621 kubeadm.go:582] duration metric: took 23.652436292s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 10:48:04.548054   23621 node_conditions.go:102] verifying NodePressure condition ...
	I1007 10:48:04.723508   23621 request.go:632] Waited for 175.357529ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes
	I1007 10:48:04.723563   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes
	I1007 10:48:04.723568   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:04.723576   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:04.723585   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:04.728067   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:04.728956   23621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 10:48:04.728985   23621 node_conditions.go:123] node cpu capacity is 2
	I1007 10:48:04.728999   23621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 10:48:04.729004   23621 node_conditions.go:123] node cpu capacity is 2
	I1007 10:48:04.729010   23621 node_conditions.go:105] duration metric: took 180.950188ms to run NodePressure ...
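The NodePressure step reads each node's reported capacity (ephemeral storage and CPU in the lines above). A hedged client-go sketch of the same lookup — placeholder kubeconfig path, not minikube's code:

```go
// Illustrative only: list nodes and print the capacity fields checked above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
```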
	I1007 10:48:04.729032   23621 start.go:241] waiting for startup goroutines ...
	I1007 10:48:04.729064   23621 start.go:255] writing updated cluster config ...
	I1007 10:48:04.731245   23621 out.go:201] 
	I1007 10:48:04.732721   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:48:04.732820   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:48:04.734501   23621 out.go:177] * Starting "ha-406505-m03" control-plane node in "ha-406505" cluster
	I1007 10:48:04.735780   23621 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:48:04.735806   23621 cache.go:56] Caching tarball of preloaded images
	I1007 10:48:04.735908   23621 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 10:48:04.735925   23621 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 10:48:04.736053   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:48:04.736293   23621 start.go:360] acquireMachinesLock for ha-406505-m03: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 10:48:04.736354   23621 start.go:364] duration metric: took 34.69µs to acquireMachinesLock for "ha-406505-m03"
	I1007 10:48:04.736376   23621 start.go:93] Provisioning new machine with config: &{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:48:04.736511   23621 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1007 10:48:04.738190   23621 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 10:48:04.738285   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:48:04.738332   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:48:04.754047   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32911
	I1007 10:48:04.754525   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:48:04.754992   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:48:04.755012   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:48:04.755365   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:48:04.755518   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetMachineName
	I1007 10:48:04.755655   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:04.755786   23621 start.go:159] libmachine.API.Create for "ha-406505" (driver="kvm2")
	I1007 10:48:04.755817   23621 client.go:168] LocalClient.Create starting
	I1007 10:48:04.755857   23621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem
	I1007 10:48:04.755899   23621 main.go:141] libmachine: Decoding PEM data...
	I1007 10:48:04.755923   23621 main.go:141] libmachine: Parsing certificate...
	I1007 10:48:04.755968   23621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem
	I1007 10:48:04.755997   23621 main.go:141] libmachine: Decoding PEM data...
	I1007 10:48:04.756011   23621 main.go:141] libmachine: Parsing certificate...
	I1007 10:48:04.756031   23621 main.go:141] libmachine: Running pre-create checks...
	I1007 10:48:04.756042   23621 main.go:141] libmachine: (ha-406505-m03) Calling .PreCreateCheck
	I1007 10:48:04.756216   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetConfigRaw
	I1007 10:48:04.756599   23621 main.go:141] libmachine: Creating machine...
	I1007 10:48:04.756611   23621 main.go:141] libmachine: (ha-406505-m03) Calling .Create
	I1007 10:48:04.756765   23621 main.go:141] libmachine: (ha-406505-m03) Creating KVM machine...
	I1007 10:48:04.757963   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found existing default KVM network
	I1007 10:48:04.758099   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found existing private KVM network mk-ha-406505
	I1007 10:48:04.758232   23621 main.go:141] libmachine: (ha-406505-m03) Setting up store path in /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03 ...
	I1007 10:48:04.758273   23621 main.go:141] libmachine: (ha-406505-m03) Building disk image from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 10:48:04.758345   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:04.758258   24407 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:48:04.758425   23621 main.go:141] libmachine: (ha-406505-m03) Downloading /home/jenkins/minikube-integration/19761-3912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 10:48:05.006754   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:05.006635   24407 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa...
	I1007 10:48:05.394400   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:05.394253   24407 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/ha-406505-m03.rawdisk...
	I1007 10:48:05.394429   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Writing magic tar header
	I1007 10:48:05.394439   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Writing SSH key tar header
	I1007 10:48:05.394459   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:05.394362   24407 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03 ...
	I1007 10:48:05.394475   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03
	I1007 10:48:05.394502   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines
	I1007 10:48:05.394516   23621 main.go:141] libmachine: (ha-406505-m03) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03 (perms=drwx------)
	I1007 10:48:05.394522   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:48:05.394535   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912
	I1007 10:48:05.394541   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 10:48:05.394550   23621 main.go:141] libmachine: (ha-406505-m03) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines (perms=drwxr-xr-x)
	I1007 10:48:05.394560   23621 main.go:141] libmachine: (ha-406505-m03) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube (perms=drwxr-xr-x)
	I1007 10:48:05.394571   23621 main.go:141] libmachine: (ha-406505-m03) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912 (perms=drwxrwxr-x)
	I1007 10:48:05.394584   23621 main.go:141] libmachine: (ha-406505-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 10:48:05.394597   23621 main.go:141] libmachine: (ha-406505-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 10:48:05.394606   23621 main.go:141] libmachine: (ha-406505-m03) Creating domain...
	I1007 10:48:05.394611   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home/jenkins
	I1007 10:48:05.394619   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home
	I1007 10:48:05.394623   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Skipping /home - not owner
	I1007 10:48:05.395724   23621 main.go:141] libmachine: (ha-406505-m03) define libvirt domain using xml: 
	I1007 10:48:05.395761   23621 main.go:141] libmachine: (ha-406505-m03) <domain type='kvm'>
	I1007 10:48:05.395773   23621 main.go:141] libmachine: (ha-406505-m03)   <name>ha-406505-m03</name>
	I1007 10:48:05.395784   23621 main.go:141] libmachine: (ha-406505-m03)   <memory unit='MiB'>2200</memory>
	I1007 10:48:05.395793   23621 main.go:141] libmachine: (ha-406505-m03)   <vcpu>2</vcpu>
	I1007 10:48:05.395802   23621 main.go:141] libmachine: (ha-406505-m03)   <features>
	I1007 10:48:05.395809   23621 main.go:141] libmachine: (ha-406505-m03)     <acpi/>
	I1007 10:48:05.395818   23621 main.go:141] libmachine: (ha-406505-m03)     <apic/>
	I1007 10:48:05.395827   23621 main.go:141] libmachine: (ha-406505-m03)     <pae/>
	I1007 10:48:05.395836   23621 main.go:141] libmachine: (ha-406505-m03)     
	I1007 10:48:05.395844   23621 main.go:141] libmachine: (ha-406505-m03)   </features>
	I1007 10:48:05.395854   23621 main.go:141] libmachine: (ha-406505-m03)   <cpu mode='host-passthrough'>
	I1007 10:48:05.395884   23621 main.go:141] libmachine: (ha-406505-m03)   
	I1007 10:48:05.395909   23621 main.go:141] libmachine: (ha-406505-m03)   </cpu>
	I1007 10:48:05.395940   23621 main.go:141] libmachine: (ha-406505-m03)   <os>
	I1007 10:48:05.395963   23621 main.go:141] libmachine: (ha-406505-m03)     <type>hvm</type>
	I1007 10:48:05.395977   23621 main.go:141] libmachine: (ha-406505-m03)     <boot dev='cdrom'/>
	I1007 10:48:05.396000   23621 main.go:141] libmachine: (ha-406505-m03)     <boot dev='hd'/>
	I1007 10:48:05.396019   23621 main.go:141] libmachine: (ha-406505-m03)     <bootmenu enable='no'/>
	I1007 10:48:05.396035   23621 main.go:141] libmachine: (ha-406505-m03)   </os>
	I1007 10:48:05.396063   23621 main.go:141] libmachine: (ha-406505-m03)   <devices>
	I1007 10:48:05.396094   23621 main.go:141] libmachine: (ha-406505-m03)     <disk type='file' device='cdrom'>
	I1007 10:48:05.396113   23621 main.go:141] libmachine: (ha-406505-m03)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/boot2docker.iso'/>
	I1007 10:48:05.396125   23621 main.go:141] libmachine: (ha-406505-m03)       <target dev='hdc' bus='scsi'/>
	I1007 10:48:05.396137   23621 main.go:141] libmachine: (ha-406505-m03)       <readonly/>
	I1007 10:48:05.396147   23621 main.go:141] libmachine: (ha-406505-m03)     </disk>
	I1007 10:48:05.396159   23621 main.go:141] libmachine: (ha-406505-m03)     <disk type='file' device='disk'>
	I1007 10:48:05.396176   23621 main.go:141] libmachine: (ha-406505-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 10:48:05.396192   23621 main.go:141] libmachine: (ha-406505-m03)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/ha-406505-m03.rawdisk'/>
	I1007 10:48:05.396207   23621 main.go:141] libmachine: (ha-406505-m03)       <target dev='hda' bus='virtio'/>
	I1007 10:48:05.396219   23621 main.go:141] libmachine: (ha-406505-m03)     </disk>
	I1007 10:48:05.396231   23621 main.go:141] libmachine: (ha-406505-m03)     <interface type='network'>
	I1007 10:48:05.396243   23621 main.go:141] libmachine: (ha-406505-m03)       <source network='mk-ha-406505'/>
	I1007 10:48:05.396258   23621 main.go:141] libmachine: (ha-406505-m03)       <model type='virtio'/>
	I1007 10:48:05.396270   23621 main.go:141] libmachine: (ha-406505-m03)     </interface>
	I1007 10:48:05.396280   23621 main.go:141] libmachine: (ha-406505-m03)     <interface type='network'>
	I1007 10:48:05.396290   23621 main.go:141] libmachine: (ha-406505-m03)       <source network='default'/>
	I1007 10:48:05.396300   23621 main.go:141] libmachine: (ha-406505-m03)       <model type='virtio'/>
	I1007 10:48:05.396309   23621 main.go:141] libmachine: (ha-406505-m03)     </interface>
	I1007 10:48:05.396320   23621 main.go:141] libmachine: (ha-406505-m03)     <serial type='pty'>
	I1007 10:48:05.396337   23621 main.go:141] libmachine: (ha-406505-m03)       <target port='0'/>
	I1007 10:48:05.396351   23621 main.go:141] libmachine: (ha-406505-m03)     </serial>
	I1007 10:48:05.396362   23621 main.go:141] libmachine: (ha-406505-m03)     <console type='pty'>
	I1007 10:48:05.396372   23621 main.go:141] libmachine: (ha-406505-m03)       <target type='serial' port='0'/>
	I1007 10:48:05.396382   23621 main.go:141] libmachine: (ha-406505-m03)     </console>
	I1007 10:48:05.396391   23621 main.go:141] libmachine: (ha-406505-m03)     <rng model='virtio'>
	I1007 10:48:05.396401   23621 main.go:141] libmachine: (ha-406505-m03)       <backend model='random'>/dev/random</backend>
	I1007 10:48:05.396411   23621 main.go:141] libmachine: (ha-406505-m03)     </rng>
	I1007 10:48:05.396418   23621 main.go:141] libmachine: (ha-406505-m03)     
	I1007 10:48:05.396427   23621 main.go:141] libmachine: (ha-406505-m03)     
	I1007 10:48:05.396436   23621 main.go:141] libmachine: (ha-406505-m03)   </devices>
	I1007 10:48:05.396454   23621 main.go:141] libmachine: (ha-406505-m03) </domain>
	I1007 10:48:05.396464   23621 main.go:141] libmachine: (ha-406505-m03) 
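The XML printed above is a complete libvirt domain definition. As a rough sketch of what defining and booting it amounts to — assuming the libvirt.org/go/libvirt bindings; minikube's kvm2 driver wraps this differently — with the XML body elided:

```go
// Illustrative only: define a persistent libvirt domain from XML and start it.
package main

import (
	"fmt"

	"libvirt.org/go/libvirt"
)

func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML) // persist the definition
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create() // boot the freshly defined domain
}

func main() {
	const domainXML = `<domain type='kvm'>...</domain>` // elided; see the XML printed above
	if err := defineAndStart(domainXML); err != nil {
		fmt.Println("define/start failed:", err)
	}
}
```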
	I1007 10:48:05.403522   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:af:df:35 in network default
	I1007 10:48:05.404128   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:05.404146   23621 main.go:141] libmachine: (ha-406505-m03) Ensuring networks are active...
	I1007 10:48:05.404936   23621 main.go:141] libmachine: (ha-406505-m03) Ensuring network default is active
	I1007 10:48:05.405208   23621 main.go:141] libmachine: (ha-406505-m03) Ensuring network mk-ha-406505 is active
	I1007 10:48:05.405622   23621 main.go:141] libmachine: (ha-406505-m03) Getting domain xml...
	I1007 10:48:05.406377   23621 main.go:141] libmachine: (ha-406505-m03) Creating domain...
	I1007 10:48:06.663273   23621 main.go:141] libmachine: (ha-406505-m03) Waiting to get IP...
	I1007 10:48:06.664152   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:06.664559   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:06.664583   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:06.664538   24407 retry.go:31] will retry after 215.584214ms: waiting for machine to come up
	I1007 10:48:06.882094   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:06.882713   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:06.882744   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:06.882654   24407 retry.go:31] will retry after 346.060218ms: waiting for machine to come up
	I1007 10:48:07.229850   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:07.230332   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:07.230440   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:07.230280   24407 retry.go:31] will retry after 442.798208ms: waiting for machine to come up
	I1007 10:48:07.675076   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:07.675596   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:07.675620   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:07.675547   24407 retry.go:31] will retry after 562.649906ms: waiting for machine to come up
	I1007 10:48:08.240324   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:08.240767   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:08.240800   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:08.240736   24407 retry.go:31] will retry after 482.878877ms: waiting for machine to come up
	I1007 10:48:08.725445   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:08.725807   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:08.725869   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:08.725755   24407 retry.go:31] will retry after 616.205186ms: waiting for machine to come up
	I1007 10:48:09.343485   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:09.343941   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:09.344003   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:09.343909   24407 retry.go:31] will retry after 1.040138153s: waiting for machine to come up
	I1007 10:48:10.386253   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:10.386682   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:10.386713   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:10.386637   24407 retry.go:31] will retry after 1.418753496s: waiting for machine to come up
	I1007 10:48:11.807040   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:11.807484   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:11.807521   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:11.807425   24407 retry.go:31] will retry after 1.535016663s: waiting for machine to come up
	I1007 10:48:13.343720   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:13.344267   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:13.344302   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:13.344197   24407 retry.go:31] will retry after 1.769880509s: waiting for machine to come up
	I1007 10:48:15.115316   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:15.115817   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:15.115850   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:15.115759   24407 retry.go:31] will retry after 2.49899664s: waiting for machine to come up
	I1007 10:48:17.617100   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:17.617680   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:17.617710   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:17.617615   24407 retry.go:31] will retry after 2.794854441s: waiting for machine to come up
	I1007 10:48:20.413842   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:20.414235   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:20.414299   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:20.414227   24407 retry.go:31] will retry after 2.870258619s: waiting for machine to come up
	I1007 10:48:23.285865   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:23.286247   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:23.286273   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:23.286205   24407 retry.go:31] will retry after 5.059515205s: waiting for machine to come up
	I1007 10:48:28.350184   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:28.350662   23621 main.go:141] libmachine: (ha-406505-m03) Found IP for machine: 192.168.39.102
	I1007 10:48:28.350688   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has current primary IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:28.350700   23621 main.go:141] libmachine: (ha-406505-m03) Reserving static IP address...
	I1007 10:48:28.351065   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find host DHCP lease matching {name: "ha-406505-m03", mac: "52:54:00:7e:e4:e0", ip: "192.168.39.102"} in network mk-ha-406505
	I1007 10:48:28.431618   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Getting to WaitForSSH function...
	I1007 10:48:28.431646   23621 main.go:141] libmachine: (ha-406505-m03) Reserved static IP address: 192.168.39.102
	I1007 10:48:28.431659   23621 main.go:141] libmachine: (ha-406505-m03) Waiting for SSH to be available...
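The "waiting for machine to come up" lines above follow a retry-with-growing-delay pattern: poll a lookup, sleep a jittered interval, and give up after a deadline. A standard-library sketch of that pattern with illustrative backoff constants (not minikube's retry.go):

```go
// Illustrative only: poll a lookup until it succeeds, sleeping a growing, jittered delay.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitFor(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		if v, err := lookup(); err == nil {
			return v, nil
		}
		if time.Now().After(deadline) {
			return "", errors.New("timed out waiting for machine to come up")
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s\n", jittered)
		time.Sleep(jittered)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
}

func main() {
	attempts := 0
	ip, err := waitFor(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet") // stand-in for the lease lookup
		}
		return "192.168.39.102", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}
```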
	I1007 10:48:28.434458   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:28.434796   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505
	I1007 10:48:28.434824   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find defined IP address of network mk-ha-406505 interface with MAC address 52:54:00:7e:e4:e0
	I1007 10:48:28.434975   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Using SSH client type: external
	I1007 10:48:28.435007   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa (-rw-------)
	I1007 10:48:28.435035   23621 main.go:141] libmachine: (ha-406505-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 10:48:28.435054   23621 main.go:141] libmachine: (ha-406505-m03) DBG | About to run SSH command:
	I1007 10:48:28.435085   23621 main.go:141] libmachine: (ha-406505-m03) DBG | exit 0
	I1007 10:48:28.439710   23621 main.go:141] libmachine: (ha-406505-m03) DBG | SSH cmd err, output: exit status 255: 
	I1007 10:48:28.439737   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1007 10:48:28.439768   23621 main.go:141] libmachine: (ha-406505-m03) DBG | command : exit 0
	I1007 10:48:28.439798   23621 main.go:141] libmachine: (ha-406505-m03) DBG | err     : exit status 255
	I1007 10:48:28.439811   23621 main.go:141] libmachine: (ha-406505-m03) DBG | output  : 
	I1007 10:48:31.440230   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Getting to WaitForSSH function...
	I1007 10:48:31.442839   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.443280   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:31.443311   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.443446   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Using SSH client type: external
	I1007 10:48:31.443482   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa (-rw-------)
	I1007 10:48:31.443520   23621 main.go:141] libmachine: (ha-406505-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 10:48:31.443544   23621 main.go:141] libmachine: (ha-406505-m03) DBG | About to run SSH command:
	I1007 10:48:31.443556   23621 main.go:141] libmachine: (ha-406505-m03) DBG | exit 0
	I1007 10:48:31.568683   23621 main.go:141] libmachine: (ha-406505-m03) DBG | SSH cmd err, output: <nil>: 
	I1007 10:48:31.568948   23621 main.go:141] libmachine: (ha-406505-m03) KVM machine creation complete!
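The WaitForSSH phase above probes the guest by running `exit 0` through the system ssh client until it returns status 0. A sketch of that probe; the host, key path, option set, and retry cadence are placeholders drawn from the log, not minikube's exact invocation:

```go
// Illustrative only: retry `ssh ... exit 0` until the guest accepts the connection.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForSSH(host, keyPath string, attempts int) error {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", keyPath,
		"docker@" + host,
		"exit 0",
	}
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command("ssh", args...).Run(); err == nil {
			return nil // exit status 0: sshd is up and the key was accepted
		}
		time.Sleep(3 * time.Second) // placeholder cadence between probes
	}
	return fmt.Errorf("ssh never became available: %w", err)
}

func main() {
	fmt.Println(waitForSSH("192.168.39.102", "/path/to/id_rsa", 10))
}
```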
	I1007 10:48:31.569279   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetConfigRaw
	I1007 10:48:31.569953   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:31.570177   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:31.570345   23621 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 10:48:31.570360   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetState
	I1007 10:48:31.571674   23621 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 10:48:31.571686   23621 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 10:48:31.571691   23621 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 10:48:31.571696   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:31.574360   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.574751   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:31.574773   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.574972   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:31.575161   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.575318   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.575453   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:31.575630   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:48:31.575886   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1007 10:48:31.575901   23621 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 10:48:31.679615   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:48:31.679639   23621 main.go:141] libmachine: Detecting the provisioner...
	I1007 10:48:31.679646   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:31.682574   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.682919   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:31.682944   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.683116   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:31.683308   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.683480   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.683605   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:31.683787   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:48:31.683977   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1007 10:48:31.684002   23621 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 10:48:31.789204   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 10:48:31.789302   23621 main.go:141] libmachine: found compatible host: buildroot
	I1007 10:48:31.789319   23621 main.go:141] libmachine: Provisioning with buildroot...
	I1007 10:48:31.789332   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetMachineName
	I1007 10:48:31.789607   23621 buildroot.go:166] provisioning hostname "ha-406505-m03"
	I1007 10:48:31.789633   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetMachineName
	I1007 10:48:31.789836   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:31.792541   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.792898   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:31.792925   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.793077   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:31.793430   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.793697   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.793864   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:31.794038   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:48:31.794203   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1007 10:48:31.794220   23621 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406505-m03 && echo "ha-406505-m03" | sudo tee /etc/hostname
	I1007 10:48:31.915086   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406505-m03
	
	I1007 10:48:31.915117   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:31.918064   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.918448   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:31.918486   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.918647   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:31.918833   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.918992   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.919119   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:31.919284   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:48:31.919488   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1007 10:48:31.919532   23621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406505-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406505-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406505-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 10:48:32.033622   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
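[editor's note] The three SSH commands above (hostname, /etc/hostname, /etc/hosts) are plain shell strings assembled on the host and executed on the guest. A minimal, hypothetical sketch of how the /etc/hosts patch seen in the log could be built in Go (not minikube's actual source; the node name is taken from the log):

package main

import "fmt"

// setHostsCmd returns the shell snippet shown in the log above: it makes the
// node name resolve locally by rewriting (or appending) the 127.0.1.1 entry.
func setHostsCmd(name string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
}

func main() {
	fmt.Println(setHostsCmd("ha-406505-m03"))
}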
	I1007 10:48:32.033656   23621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 10:48:32.033671   23621 buildroot.go:174] setting up certificates
	I1007 10:48:32.033679   23621 provision.go:84] configureAuth start
	I1007 10:48:32.033688   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetMachineName
	I1007 10:48:32.034012   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetIP
	I1007 10:48:32.037059   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.037482   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.037516   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.037674   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:32.040020   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.040373   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.040394   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.040541   23621 provision.go:143] copyHostCerts
	I1007 10:48:32.040567   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:48:32.040595   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 10:48:32.040603   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:48:32.040668   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 10:48:32.040738   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:48:32.040754   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 10:48:32.040761   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:48:32.040784   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 10:48:32.040824   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:48:32.040840   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 10:48:32.040846   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:48:32.040866   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 10:48:32.040911   23621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.ha-406505-m03 san=[127.0.0.1 192.168.39.102 ha-406505-m03 localhost minikube]
	I1007 10:48:32.221278   23621 provision.go:177] copyRemoteCerts
	I1007 10:48:32.221329   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 10:48:32.221355   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:32.224264   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.224745   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.224771   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.224993   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:32.225158   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.225327   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:32.225465   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa Username:docker}
	I1007 10:48:32.308320   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 10:48:32.308394   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 10:48:32.337349   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 10:48:32.337427   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 10:48:32.362724   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 10:48:32.362808   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 10:48:32.388055   23621 provision.go:87] duration metric: took 354.362269ms to configureAuth
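[editor's note] configureAuth above generates a server certificate whose SANs (provision.go:117) must include the node's IP 192.168.39.102 and the names "ha-406505-m03", "localhost" and "minikube". A small sketch, assuming a local copy of the server.pem referenced in the log, that prints a certificate's SANs so those values can be cross-checked:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Path is illustrative; point it at the server.pem mentioned in the log.
	data, err := os.ReadFile("server.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
	fmt.Println("Org:     ", cert.Subject.Organization)
}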
	I1007 10:48:32.388097   23621 buildroot.go:189] setting minikube options for container-runtime
	I1007 10:48:32.388337   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:48:32.388417   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:32.391464   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.391888   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.391916   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.392130   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:32.392314   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.392419   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.392546   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:32.392731   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:48:32.392934   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1007 10:48:32.392957   23621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 10:48:32.625746   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 10:48:32.625778   23621 main.go:141] libmachine: Checking connection to Docker...
	I1007 10:48:32.625788   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetURL
	I1007 10:48:32.627033   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Using libvirt version 6000000
	I1007 10:48:32.629153   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.629483   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.629535   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.629659   23621 main.go:141] libmachine: Docker is up and running!
	I1007 10:48:32.629673   23621 main.go:141] libmachine: Reticulating splines...
	I1007 10:48:32.629679   23621 client.go:171] duration metric: took 27.87385173s to LocalClient.Create
	I1007 10:48:32.629697   23621 start.go:167] duration metric: took 27.873912748s to libmachine.API.Create "ha-406505"
	I1007 10:48:32.629707   23621 start.go:293] postStartSetup for "ha-406505-m03" (driver="kvm2")
	I1007 10:48:32.629716   23621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 10:48:32.629732   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:32.629961   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 10:48:32.629987   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:32.632229   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.632615   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.632638   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.632778   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:32.632953   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.633107   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:32.633255   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa Username:docker}
	I1007 10:48:32.719017   23621 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 10:48:32.723755   23621 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 10:48:32.723780   23621 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 10:48:32.723839   23621 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 10:48:32.723945   23621 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 10:48:32.723957   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /etc/ssl/certs/110962.pem
	I1007 10:48:32.724071   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 10:48:32.734023   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:48:32.759071   23621 start.go:296] duration metric: took 129.349571ms for postStartSetup
	I1007 10:48:32.759128   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetConfigRaw
	I1007 10:48:32.759727   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetIP
	I1007 10:48:32.762372   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.762794   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.762825   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.763105   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:48:32.763346   23621 start.go:128] duration metric: took 28.026823197s to createHost
	I1007 10:48:32.763370   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:32.765734   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.766060   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.766091   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.766305   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:32.766467   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.766612   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.766764   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:32.766903   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:48:32.767070   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1007 10:48:32.767079   23621 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 10:48:32.873753   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728298112.851911112
	
	I1007 10:48:32.873779   23621 fix.go:216] guest clock: 1728298112.851911112
	I1007 10:48:32.873789   23621 fix.go:229] Guest: 2024-10-07 10:48:32.851911112 +0000 UTC Remote: 2024-10-07 10:48:32.763358943 +0000 UTC m=+152.116498435 (delta=88.552169ms)
	I1007 10:48:32.873808   23621 fix.go:200] guest clock delta is within tolerance: 88.552169ms
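[editor's note] The guest clock probe is the `date +%s.%N` command above; the reported delta (88.55ms here) is the difference between that value and the host's wall clock at the end of createHost. A sketch, assuming an illustrative tolerance value (not minikube's actual constant), of how the delta could be computed from that output:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// guestDelta parses the output of `date +%s.%N` (seconds.nanoseconds) and
// returns how far the guest clock is from the given host reference time.
func guestDelta(out string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	host := time.Unix(0, 1728298112763358943) // host-side reference from the log above
	d, err := guestDelta("1728298112.851911112", host)
	if err != nil {
		panic(err)
	}
	// Tolerance value is illustrative only.
	const tolerance = 2 * time.Second
	fmt.Printf("delta=%v within=%v\n", d, math.Abs(d.Seconds()) < tolerance.Seconds())
}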
	I1007 10:48:32.873815   23621 start.go:83] releasing machines lock for "ha-406505-m03", held for 28.137449792s
	I1007 10:48:32.873834   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:32.874113   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetIP
	I1007 10:48:32.877249   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.877618   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.877659   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.879531   23621 out.go:177] * Found network options:
	I1007 10:48:32.880848   23621 out.go:177]   - NO_PROXY=192.168.39.250,192.168.39.37
	W1007 10:48:32.882090   23621 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 10:48:32.882109   23621 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 10:48:32.882124   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:32.882710   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:32.882882   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:32.882980   23621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 10:48:32.883020   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	W1007 10:48:32.883028   23621 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 10:48:32.883048   23621 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 10:48:32.883114   23621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 10:48:32.883136   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:32.885892   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.886191   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.886254   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.886279   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.886434   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:32.886593   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.886690   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.886721   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.886723   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:32.886891   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:32.886927   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa Username:docker}
	I1007 10:48:32.887008   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.887172   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:32.887336   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa Username:docker}
	I1007 10:48:33.125827   23621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 10:48:33.132836   23621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 10:48:33.132914   23621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 10:48:33.152264   23621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 10:48:33.152289   23621 start.go:495] detecting cgroup driver to use...
	I1007 10:48:33.152363   23621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 10:48:33.172642   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 10:48:33.190770   23621 docker.go:217] disabling cri-docker service (if available) ...
	I1007 10:48:33.190848   23621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 10:48:33.206401   23621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 10:48:33.222941   23621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 10:48:33.363133   23621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 10:48:33.526409   23621 docker.go:233] disabling docker service ...
	I1007 10:48:33.526475   23621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 10:48:33.542837   23621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 10:48:33.557673   23621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 10:48:33.715377   23621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 10:48:33.847470   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 10:48:33.862560   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 10:48:33.884061   23621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 10:48:33.884116   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.897298   23621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 10:48:33.897363   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.909096   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.921064   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.932787   23621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 10:48:33.944724   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.956149   23621 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.976708   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
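[editor's note] CRI-O is configured above by in-place sed edits of /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup_manager, conmon_cgroup, default_sysctls). A hypothetical Go equivalent of the first of those edits, the pause_image substitution, mirroring the sed expression from the log:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}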
	I1007 10:48:33.988978   23621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 10:48:33.999874   23621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 10:48:33.999940   23621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 10:48:34.015557   23621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 10:48:34.026499   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:48:34.149992   23621 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 10:48:34.251227   23621 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 10:48:34.251293   23621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 10:48:34.256863   23621 start.go:563] Will wait 60s for crictl version
	I1007 10:48:34.256915   23621 ssh_runner.go:195] Run: which crictl
	I1007 10:48:34.260970   23621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 10:48:34.301659   23621 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 10:48:34.301747   23621 ssh_runner.go:195] Run: crio --version
	I1007 10:48:34.332633   23621 ssh_runner.go:195] Run: crio --version
	I1007 10:48:34.367466   23621 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 10:48:34.369001   23621 out.go:177]   - env NO_PROXY=192.168.39.250
	I1007 10:48:34.370423   23621 out.go:177]   - env NO_PROXY=192.168.39.250,192.168.39.37
	I1007 10:48:34.371711   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetIP
	I1007 10:48:34.374438   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:34.374867   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:34.374897   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:34.375117   23621 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 10:48:34.379896   23621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:48:34.393502   23621 mustload.go:65] Loading cluster: ha-406505
	I1007 10:48:34.393757   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:48:34.394025   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:48:34.394061   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:48:34.411296   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38513
	I1007 10:48:34.411826   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:48:34.412384   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:48:34.412408   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:48:34.412720   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:48:34.412914   23621 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:48:34.414711   23621 host.go:66] Checking if "ha-406505" exists ...
	I1007 10:48:34.415007   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:48:34.415055   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:48:34.431721   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34665
	I1007 10:48:34.432227   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:48:34.432721   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:48:34.432743   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:48:34.433085   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:48:34.433286   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:48:34.433443   23621 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505 for IP: 192.168.39.102
	I1007 10:48:34.433455   23621 certs.go:194] generating shared ca certs ...
	I1007 10:48:34.433473   23621 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:48:34.433653   23621 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 10:48:34.433694   23621 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 10:48:34.433704   23621 certs.go:256] generating profile certs ...
	I1007 10:48:34.433769   23621 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key
	I1007 10:48:34.433796   23621 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.2cb567af
	I1007 10:48:34.433810   23621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.2cb567af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.250 192.168.39.37 192.168.39.102 192.168.39.254]
	I1007 10:48:34.626802   23621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.2cb567af ...
	I1007 10:48:34.626838   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.2cb567af: {Name:mk4dc5899bb034b35a02970b97ee9a5705168f50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:48:34.627028   23621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.2cb567af ...
	I1007 10:48:34.627045   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.2cb567af: {Name:mk33cc429fb28f1dd32077e7c6736b9265eee4dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:48:34.627160   23621 certs.go:381] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.2cb567af -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt
	I1007 10:48:34.627332   23621 certs.go:385] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.2cb567af -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key
	I1007 10:48:34.627505   23621 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key
	I1007 10:48:34.627523   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 10:48:34.627547   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 10:48:34.627570   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 10:48:34.627588   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 10:48:34.627606   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 10:48:34.627624   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 10:48:34.627650   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 10:48:34.648122   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 10:48:34.648245   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 10:48:34.648300   23621 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 10:48:34.648313   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 10:48:34.648345   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 10:48:34.648376   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 10:48:34.648424   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 10:48:34.649013   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:48:34.649072   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /usr/share/ca-certificates/110962.pem
	I1007 10:48:34.649091   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:48:34.649106   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem -> /usr/share/ca-certificates/11096.pem
	I1007 10:48:34.649154   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:48:34.652851   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:48:34.653287   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:48:34.653319   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:48:34.653480   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:48:34.653695   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:48:34.653872   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:48:34.653998   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:48:34.732255   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 10:48:34.739182   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 10:48:34.751245   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 10:48:34.755732   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1007 10:48:34.766849   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 10:48:34.771581   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 10:48:34.783409   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 10:48:34.788150   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1007 10:48:34.799354   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 10:48:34.804283   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 10:48:34.816354   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 10:48:34.821135   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 10:48:34.834977   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 10:48:34.863883   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 10:48:34.896166   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 10:48:34.926479   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 10:48:34.954664   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1007 10:48:34.981371   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 10:48:35.009381   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 10:48:35.036950   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 10:48:35.063824   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 10:48:35.091476   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 10:48:35.119954   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 10:48:35.148052   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 10:48:35.166363   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1007 10:48:35.186175   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 10:48:35.205554   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1007 10:48:35.223002   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 10:48:35.240092   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 10:48:35.256797   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 10:48:35.274939   23621 ssh_runner.go:195] Run: openssl version
	I1007 10:48:35.281362   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 10:48:35.293636   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 10:48:35.298579   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 10:48:35.298639   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 10:48:35.304753   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 10:48:35.315888   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 10:48:35.326832   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:48:35.331554   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:48:35.331619   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:48:35.337434   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 10:48:35.348665   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 10:48:35.360023   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 10:48:35.365259   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 10:48:35.365338   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 10:48:35.372821   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
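[editor's note] Each CA copied into /usr/share/ca-certificates above is also linked under /etc/ssl/certs by its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem), which is how TLS clients locate trusted CAs. A sketch, assuming openssl is on PATH and the process may write to /etc/ssl/certs, that reproduces the two shell steps from the log:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

// linkByHash computes the OpenSSL subject hash of a CA file and symlinks
// /etc/ssl/certs/<hash>.0 to it, like the openssl + ln -fs pair in the log.
func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // ln -fs semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}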
	I1007 10:48:35.385592   23621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 10:48:35.390405   23621 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 10:48:35.390455   23621 kubeadm.go:934] updating node {m03 192.168.39.102 8443 v1.31.1 crio true true} ...
	I1007 10:48:35.390529   23621 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406505-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
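[editor's note] The kubelet systemd drop-in above overrides ExecStart with the node-specific flags (--hostname-override and --node-ip). A minimal sketch, with a hypothetical helper name, of how that line could be composed from the values in the log:

package main

import "fmt"

// kubeletExecStart assembles the node-specific ExecStart line shown above.
// Binary path and flag set mirror the log; this is not minikube's source.
func kubeletExecStart(version, hostname, nodeIP string) string {
	return fmt.Sprintf("ExecStart=/var/lib/minikube/binaries/%s/kubelet "+
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
		"--config=/var/lib/kubelet/config.yaml "+
		"--hostname-override=%s "+
		"--kubeconfig=/etc/kubernetes/kubelet.conf "+
		"--node-ip=%s", version, hostname, nodeIP)
}

func main() {
	fmt.Println(kubeletExecStart("v1.31.1", "ha-406505-m03", "192.168.39.102"))
}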
	I1007 10:48:35.390554   23621 kube-vip.go:115] generating kube-vip config ...
	I1007 10:48:35.390588   23621 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 10:48:35.407020   23621 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 10:48:35.407098   23621 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
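[editor's note] The kube-vip static pod manifest above pins the virtual IP 192.168.39.254 on eth0 and enables control-plane load balancing on port 8443; only a few env values differ per cluster. A hypothetical sketch (field names and fragment are illustrative, not minikube's template) of filling those values with text/template:

package main

import (
	"log"
	"os"
	"text/template"
)

// A fragment of the env list above with the per-cluster values templated.
const envFragment = `    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .VIP }}
    - name: lb_port
      value: "{{ .Port }}"
`

func main() {
	t := template.Must(template.New("kubevip").Parse(envFragment))
	data := struct {
		Interface, VIP string
		Port           int
	}{Interface: "eth0", VIP: "192.168.39.254", Port: 8443}
	if err := t.Execute(os.Stdout, data); err != nil {
		log.Fatal(err)
	}
}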
	I1007 10:48:35.407155   23621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 10:48:35.417610   23621 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1007 10:48:35.417677   23621 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1007 10:48:35.428405   23621 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1007 10:48:35.428437   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 10:48:35.428436   23621 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1007 10:48:35.428474   23621 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1007 10:48:35.428487   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 10:48:35.428508   23621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 10:48:35.428547   23621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 10:48:35.428511   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 10:48:35.446473   23621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1007 10:48:35.446517   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1007 10:48:35.446544   23621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1007 10:48:35.446546   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 10:48:35.446583   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1007 10:48:35.446648   23621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 10:48:35.470883   23621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1007 10:48:35.470927   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
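[editor's note] The binaries above are fetched from dl.k8s.io with a companion .sha256 checksum (the `?checksum=file:` URLs in the log) and then copied into /var/lib/minikube/binaries on the node. A simplified sketch, with an illustrative output path, that downloads kubectl and verifies it against the published SHA-256:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	const base = "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		log.Fatal(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		log.Fatal(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0]
	if hex.EncodeToString(got[:]) != want {
		log.Fatalf("checksum mismatch: got %x want %s", got, want)
	}
	// Destination is illustrative; the log copies into /var/lib/minikube/binaries.
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		log.Fatal(err)
	}
}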
	I1007 10:48:36.357285   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 10:48:36.367780   23621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 10:48:36.389088   23621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 10:48:36.406417   23621 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 10:48:36.424782   23621 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 10:48:36.429051   23621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:48:36.442669   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:48:36.586820   23621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:48:36.605650   23621 host.go:66] Checking if "ha-406505" exists ...
	I1007 10:48:36.606095   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:48:36.606145   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:48:36.622824   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45931
	I1007 10:48:36.623406   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:48:36.623956   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:48:36.624010   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:48:36.624375   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:48:36.624602   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:48:36.624756   23621 start.go:317] joinCluster: &{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:48:36.624906   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1007 10:48:36.624922   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:48:36.628085   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:48:36.628498   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:48:36.628533   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:48:36.628663   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:48:36.628842   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:48:36.628992   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:48:36.629138   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:48:36.794813   23621 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:48:36.794869   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gpv0xr.ao0m8qerz0fls7pl --discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-406505-m03 --control-plane --apiserver-advertise-address=192.168.39.102 --apiserver-bind-port=8443"
	I1007 10:48:59.856325   23621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gpv0xr.ao0m8qerz0fls7pl --discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-406505-m03 --control-plane --apiserver-advertise-address=192.168.39.102 --apiserver-bind-port=8443": (23.06138473s)
	I1007 10:48:59.856362   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1007 10:49:00.490810   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-406505-m03 minikube.k8s.io/updated_at=2024_10_07T10_49_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f minikube.k8s.io/name=ha-406505 minikube.k8s.io/primary=false
	I1007 10:49:00.615125   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-406505-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1007 10:49:00.740706   23621 start.go:319] duration metric: took 24.115945375s to joinCluster
	I1007 10:49:00.740808   23621 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:49:00.741314   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:49:00.742651   23621 out.go:177] * Verifying Kubernetes components...
	I1007 10:49:00.744087   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:49:00.980117   23621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:49:00.996987   23621 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:49:00.997383   23621 kapi.go:59] client config for ha-406505: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.crt", KeyFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key", CAFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 10:49:00.997456   23621 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.250:8443
	I1007 10:49:00.997848   23621 node_ready.go:35] waiting up to 6m0s for node "ha-406505-m03" to be "Ready" ...
	I1007 10:49:00.997952   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:00.997963   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:00.997973   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:00.997980   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:01.002879   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:01.498022   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:01.498047   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:01.498058   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:01.498063   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:01.502144   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:01.998532   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:01.998559   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:01.998571   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:01.998580   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:02.002214   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:02.498080   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:02.498113   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:02.498126   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:02.498132   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:02.502433   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:02.998449   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:02.998474   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:02.998482   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:02.998486   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:03.001753   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:03.002481   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:03.498693   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:03.498717   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:03.498727   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:03.498732   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:03.503726   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:03.998977   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:03.999008   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:03.999019   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:03.999026   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:04.002356   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:04.498338   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:04.498365   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:04.498374   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:04.498379   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:04.502295   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:04.998619   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:04.998645   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:04.998656   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:04.998660   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:05.001641   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:49:05.498634   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:05.498660   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:05.498671   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:05.498677   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:05.502156   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:05.502885   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:05.998723   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:05.998794   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:05.998812   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:05.998818   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:06.003873   23621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 10:49:06.499098   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:06.499119   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:06.499126   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:06.499131   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:06.503089   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:06.998553   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:06.998587   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:06.998595   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:06.998599   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:07.002580   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:07.498710   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:07.498736   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:07.498746   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:07.498751   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:07.502124   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:07.502967   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:07.998236   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:07.998258   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:07.998267   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:07.998271   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:08.001970   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:08.498896   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:08.498918   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:08.498927   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:08.498931   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:08.502697   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:08.998532   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:08.998561   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:08.998571   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:08.998578   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:09.002002   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:09.498039   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:09.498064   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:09.498077   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:09.498084   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:09.502005   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:09.998852   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:09.998879   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:09.998887   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:09.998893   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:10.002735   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:10.003524   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:10.499000   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:10.499026   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:10.499034   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:10.499046   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:10.502792   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:10.998624   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:10.998647   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:10.998659   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:10.998663   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:11.002342   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:11.498150   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:11.498177   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:11.498186   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:11.498193   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:11.502277   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:11.998714   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:11.998735   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:11.998743   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:11.998748   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:12.002263   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:12.498755   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:12.498782   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:12.498794   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:12.498801   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:12.502981   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:12.503718   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:12.999042   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:12.999069   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:12.999079   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:12.999085   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:13.002464   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:13.498077   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:13.498101   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:13.498110   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:13.498115   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:13.501652   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:13.998309   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:13.998332   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:13.998343   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:13.998347   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:14.001704   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:14.498713   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:14.498734   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:14.498742   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:14.498745   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:14.502719   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:14.999025   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:14.999047   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:14.999055   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:14.999059   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:15.002812   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:15.003362   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:15.498817   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:15.498839   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:15.498846   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:15.498850   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:15.504009   23621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 10:49:15.998456   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:15.998477   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:15.998485   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:15.998488   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:16.001780   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:16.498830   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:16.498857   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:16.498868   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:16.498873   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:16.502631   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:16.998224   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:16.998257   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:16.998268   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:16.998274   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:17.001615   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:17.498645   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:17.498672   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:17.498684   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:17.498688   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:17.502201   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:17.502837   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:17.998189   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:17.998213   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:17.998220   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:17.998226   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.001816   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:18.498415   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:18.498450   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.498462   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.498469   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.502015   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:18.502523   23621 node_ready.go:49] node "ha-406505-m03" has status "Ready":"True"
	I1007 10:49:18.502543   23621 node_ready.go:38] duration metric: took 17.504667395s for node "ha-406505-m03" to be "Ready" ...
	I1007 10:49:18.502551   23621 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 10:49:18.502632   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:49:18.502642   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.502650   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.502656   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.509327   23621 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 10:49:18.518372   23621 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ghmwd" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.518459   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghmwd
	I1007 10:49:18.518464   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.518472   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.518479   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.521616   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:18.522356   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:18.522371   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.522378   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.522382   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.524976   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:49:18.525512   23621 pod_ready.go:93] pod "coredns-7c65d6cfc9-ghmwd" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:18.525532   23621 pod_ready.go:82] duration metric: took 7.133708ms for pod "coredns-7c65d6cfc9-ghmwd" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.525541   23621 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xzc88" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.525593   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xzc88
	I1007 10:49:18.525602   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.525608   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.525612   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.528321   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:49:18.529035   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:18.529049   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.529055   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.529058   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.531646   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:49:18.532124   23621 pod_ready.go:93] pod "coredns-7c65d6cfc9-xzc88" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:18.532141   23621 pod_ready.go:82] duration metric: took 6.593928ms for pod "coredns-7c65d6cfc9-xzc88" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.532153   23621 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.532225   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-406505
	I1007 10:49:18.532234   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.532244   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.532249   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.534614   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:49:18.535248   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:18.535264   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.535274   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.535279   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.537970   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:49:18.538368   23621 pod_ready.go:93] pod "etcd-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:18.538387   23621 pod_ready.go:82] duration metric: took 6.225816ms for pod "etcd-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.538401   23621 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.538461   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-406505-m02
	I1007 10:49:18.538472   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.538483   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.538491   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.541748   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:18.542359   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:18.542377   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.542389   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.542397   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.545668   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:18.546089   23621 pod_ready.go:93] pod "etcd-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:18.546104   23621 pod_ready.go:82] duration metric: took 7.695818ms for pod "etcd-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.546113   23621 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.698417   23621 request.go:632] Waited for 152.247174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-406505-m03
	I1007 10:49:18.698479   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-406505-m03
	I1007 10:49:18.698485   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.698492   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.698497   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.702261   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:18.899482   23621 request.go:632] Waited for 196.389358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:18.899569   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:18.899582   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.899593   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.899603   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.903728   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:18.904256   23621 pod_ready.go:93] pod "etcd-ha-406505-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:18.904275   23621 pod_ready.go:82] duration metric: took 358.156028ms for pod "etcd-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.904291   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:19.099454   23621 request.go:632] Waited for 195.101714ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505
	I1007 10:49:19.099547   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505
	I1007 10:49:19.099559   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:19.099569   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:19.099575   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:19.103611   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:19.298735   23621 request.go:632] Waited for 194.375211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:19.298818   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:19.298825   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:19.298837   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:19.298856   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:19.302548   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:19.303053   23621 pod_ready.go:93] pod "kube-apiserver-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:19.303069   23621 pod_ready.go:82] duration metric: took 398.772541ms for pod "kube-apiserver-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:19.303079   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:19.499176   23621 request.go:632] Waited for 196.018641ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505-m02
	I1007 10:49:19.499270   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505-m02
	I1007 10:49:19.499283   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:19.499296   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:19.499309   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:19.503085   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:19.699374   23621 request.go:632] Waited for 195.380837ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:19.699426   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:19.699432   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:19.699439   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:19.699443   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:19.703099   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:19.703625   23621 pod_ready.go:93] pod "kube-apiserver-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:19.703644   23621 pod_ready.go:82] duration metric: took 400.557163ms for pod "kube-apiserver-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:19.703654   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:19.899212   23621 request.go:632] Waited for 195.494385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505-m03
	I1007 10:49:19.899266   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505-m03
	I1007 10:49:19.899271   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:19.899283   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:19.899289   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:19.902896   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:20.098927   23621 request.go:632] Waited for 195.376619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:20.098987   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:20.098993   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:20.099000   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:20.099004   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:20.102179   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:20.102740   23621 pod_ready.go:93] pod "kube-apiserver-ha-406505-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:20.102763   23621 pod_ready.go:82] duration metric: took 399.102679ms for pod "kube-apiserver-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:20.102773   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:20.298944   23621 request.go:632] Waited for 196.089064ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505
	I1007 10:49:20.299004   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505
	I1007 10:49:20.299010   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:20.299017   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:20.299023   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:20.302867   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:20.498409   23621 request.go:632] Waited for 194.294244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:20.498569   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:20.498582   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:20.498592   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:20.498599   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:20.502204   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:20.503003   23621 pod_ready.go:93] pod "kube-controller-manager-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:20.503027   23621 pod_ready.go:82] duration metric: took 400.247835ms for pod "kube-controller-manager-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:20.503037   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:20.699318   23621 request.go:632] Waited for 196.218592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505-m02
	I1007 10:49:20.699394   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505-m02
	I1007 10:49:20.699405   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:20.699415   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:20.699424   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:20.702950   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:20.899287   23621 request.go:632] Waited for 195.402635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:20.899343   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:20.899349   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:20.899370   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:20.899375   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:20.903339   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:20.904141   23621 pod_ready.go:93] pod "kube-controller-manager-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:20.904160   23621 pod_ready.go:82] duration metric: took 401.116067ms for pod "kube-controller-manager-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:20.904170   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:21.099320   23621 request.go:632] Waited for 195.054621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505-m03
	I1007 10:49:21.099383   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505-m03
	I1007 10:49:21.099391   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:21.099404   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:21.099415   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:21.103012   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:21.299153   23621 request.go:632] Waited for 195.377964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:21.299213   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:21.299218   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:21.299225   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:21.299229   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:21.303015   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:21.303516   23621 pod_ready.go:93] pod "kube-controller-manager-ha-406505-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:21.303534   23621 pod_ready.go:82] duration metric: took 399.355676ms for pod "kube-controller-manager-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:21.303543   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6ng4z" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:21.498530   23621 request.go:632] Waited for 194.920994ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6ng4z
	I1007 10:49:21.498597   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6ng4z
	I1007 10:49:21.498603   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:21.498610   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:21.498614   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:21.502242   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:21.699351   23621 request.go:632] Waited for 196.362706ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:21.699418   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:21.699423   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:21.699431   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:21.699435   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:21.702722   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:21.703412   23621 pod_ready.go:93] pod "kube-proxy-6ng4z" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:21.703429   23621 pod_ready.go:82] duration metric: took 399.878679ms for pod "kube-proxy-6ng4z" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:21.703439   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c79zf" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:21.898495   23621 request.go:632] Waited for 195.001064ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c79zf
	I1007 10:49:21.898570   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c79zf
	I1007 10:49:21.898576   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:21.898583   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:21.898587   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:21.903113   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:22.099311   23621 request.go:632] Waited for 195.352243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:22.099376   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:22.099384   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:22.099392   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:22.099397   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:22.102668   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:22.103269   23621 pod_ready.go:93] pod "kube-proxy-c79zf" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:22.103284   23621 pod_ready.go:82] duration metric: took 399.838704ms for pod "kube-proxy-c79zf" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:22.103298   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nlnhf" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:22.299438   23621 request.go:632] Waited for 196.048125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nlnhf
	I1007 10:49:22.299517   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nlnhf
	I1007 10:49:22.299528   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:22.299539   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:22.299548   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:22.303349   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:22.499362   23621 request.go:632] Waited for 195.369323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:22.499426   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:22.499434   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:22.499445   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:22.499452   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:22.503812   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:22.504569   23621 pod_ready.go:93] pod "kube-proxy-nlnhf" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:22.504595   23621 pod_ready.go:82] duration metric: took 401.287955ms for pod "kube-proxy-nlnhf" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:22.504608   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:22.698460   23621 request.go:632] Waited for 193.785531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505
	I1007 10:49:22.698548   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505
	I1007 10:49:22.698557   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:22.698568   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:22.698578   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:22.702017   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:22.898981   23621 request.go:632] Waited for 196.377795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:22.899067   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:22.899078   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:22.899089   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:22.899095   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:22.902303   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:22.903166   23621 pod_ready.go:93] pod "kube-scheduler-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:22.903182   23621 pod_ready.go:82] duration metric: took 398.566323ms for pod "kube-scheduler-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:22.903191   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:23.099385   23621 request.go:632] Waited for 196.133679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505-m02
	I1007 10:49:23.099448   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505-m02
	I1007 10:49:23.099455   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:23.099466   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:23.099472   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:23.102786   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:23.298901   23621 request.go:632] Waited for 195.266193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:23.298979   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:23.299002   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:23.299017   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:23.299025   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:23.302232   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:23.302790   23621 pod_ready.go:93] pod "kube-scheduler-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:23.302809   23621 pod_ready.go:82] duration metric: took 399.610952ms for pod "kube-scheduler-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:23.302821   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:23.499180   23621 request.go:632] Waited for 196.292359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505-m03
	I1007 10:49:23.499272   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505-m03
	I1007 10:49:23.499287   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:23.499297   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:23.499301   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:23.502869   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:23.699193   23621 request.go:632] Waited for 195.355503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:23.699258   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:23.699265   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:23.699273   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:23.699279   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:23.703084   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:23.703667   23621 pod_ready.go:93] pod "kube-scheduler-ha-406505-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:23.703685   23621 pod_ready.go:82] duration metric: took 400.856999ms for pod "kube-scheduler-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:23.703698   23621 pod_ready.go:39] duration metric: took 5.201137337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 10:49:23.703714   23621 api_server.go:52] waiting for apiserver process to appear ...
	I1007 10:49:23.703771   23621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 10:49:23.720988   23621 api_server.go:72] duration metric: took 22.980139715s to wait for apiserver process to appear ...
	I1007 10:49:23.721017   23621 api_server.go:88] waiting for apiserver healthz status ...
	I1007 10:49:23.721038   23621 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I1007 10:49:23.727765   23621 api_server.go:279] https://192.168.39.250:8443/healthz returned 200:
	ok
	I1007 10:49:23.727841   23621 round_trippers.go:463] GET https://192.168.39.250:8443/version
	I1007 10:49:23.727846   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:23.727855   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:23.727860   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:23.728928   23621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1007 10:49:23.729002   23621 api_server.go:141] control plane version: v1.31.1
	I1007 10:49:23.729019   23621 api_server.go:131] duration metric: took 7.995236ms to wait for apiserver health ...
	I1007 10:49:23.729029   23621 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 10:49:23.899405   23621 request.go:632] Waited for 170.304588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:49:23.899474   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:49:23.899479   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:23.899494   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:23.899501   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:23.905647   23621 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 10:49:23.912018   23621 system_pods.go:59] 24 kube-system pods found
	I1007 10:49:23.912046   23621 system_pods.go:61] "coredns-7c65d6cfc9-ghmwd" [8d8533b9-192b-49a8-8d17-96ffd98cb729] Running
	I1007 10:49:23.912051   23621 system_pods.go:61] "coredns-7c65d6cfc9-xzc88" [f22736c0-5ca4-4c9b-bcd4-cf95f9390507] Running
	I1007 10:49:23.912055   23621 system_pods.go:61] "etcd-ha-406505" [06acd5be-60d4-4e5d-878a-eb237eccef90] Running
	I1007 10:49:23.912059   23621 system_pods.go:61] "etcd-ha-406505-m02" [6bac8986-60bc-4067-8d14-39e7a7d89de4] Running
	I1007 10:49:23.912064   23621 system_pods.go:61] "etcd-ha-406505-m03" [2c0079fb-51f1-423c-8b4c-893824342cd6] Running
	I1007 10:49:23.912069   23621 system_pods.go:61] "kindnet-28vpp" [c14e8bdf-ebc5-4349-adb4-6786cd15551d] Running
	I1007 10:49:23.912074   23621 system_pods.go:61] "kindnet-h8fh4" [4963cef8-d0f0-47a7-a9f3-4ec6cc1cbdd2] Running
	I1007 10:49:23.912079   23621 system_pods.go:61] "kindnet-pt74h" [bb72605c-a772-4b04-a14d-02efe957c9d0] Running
	I1007 10:49:23.912087   23621 system_pods.go:61] "kube-apiserver-ha-406505" [86ec3125-9faf-431c-829c-74bedca10848] Running
	I1007 10:49:23.912092   23621 system_pods.go:61] "kube-apiserver-ha-406505-m02" [9b1f1980-971a-4059-90f3-75aa418811e9] Running
	I1007 10:49:23.912101   23621 system_pods.go:61] "kube-apiserver-ha-406505-m03" [8bc80684-cd9a-40b1-94e1-02cb77917c36] Running
	I1007 10:49:23.912106   23621 system_pods.go:61] "kube-controller-manager-ha-406505" [9f228931-e5fe-4983-aed5-71ec54a10242] Running
	I1007 10:49:23.912111   23621 system_pods.go:61] "kube-controller-manager-ha-406505-m02" [87c1a5f3-2c61-40b6-8ac6-8641fe480883] Running
	I1007 10:49:23.912116   23621 system_pods.go:61] "kube-controller-manager-ha-406505-m03" [ab97ec1a-fb7e-42a5-b77c-721ccf85db1d] Running
	I1007 10:49:23.912120   23621 system_pods.go:61] "kube-proxy-6ng4z" [0bbf71c3-f4c6-44e2-a86f-2528957fd17e] Running
	I1007 10:49:23.912123   23621 system_pods.go:61] "kube-proxy-c79zf" [2b12aaa5-9560-459b-a3bb-e45e73a6b663] Running
	I1007 10:49:23.912129   23621 system_pods.go:61] "kube-proxy-nlnhf" [053080d5-38da-4108-96aa-f4a8dbe5de91] Running
	I1007 10:49:23.912132   23621 system_pods.go:61] "kube-scheduler-ha-406505" [40a9a2f6-4f4e-48eb-8ef2-958b87cea171] Running
	I1007 10:49:23.912135   23621 system_pods.go:61] "kube-scheduler-ha-406505-m02" [b1def4a5-2143-46b6-ae4a-44b7997ec7b2] Running
	I1007 10:49:23.912139   23621 system_pods.go:61] "kube-scheduler-ha-406505-m03" [da8d486f-250a-4961-ac7c-b1435c52a3ca] Running
	I1007 10:49:23.912147   23621 system_pods.go:61] "kube-vip-ha-406505" [e31ca80b-44ca-4b0a-8d38-ba06f81592f8] Running
	I1007 10:49:23.912152   23621 system_pods.go:61] "kube-vip-ha-406505-m02" [982dab2a-18d2-409a-9d27-51f746597898] Running
	I1007 10:49:23.912155   23621 system_pods.go:61] "kube-vip-ha-406505-m03" [a90a6084-73a3-476c-9729-1d8b45c6f3fc] Running
	I1007 10:49:23.912160   23621 system_pods.go:61] "storage-provisioner" [be10b32c-e562-40ef-8b47-04cd1caf9778] Running
	I1007 10:49:23.912167   23621 system_pods.go:74] duration metric: took 183.129229ms to wait for pod list to return data ...
	I1007 10:49:23.912178   23621 default_sa.go:34] waiting for default service account to be created ...
	I1007 10:49:24.099457   23621 request.go:632] Waited for 187.192356ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/default/serviceaccounts
	I1007 10:49:24.099519   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/default/serviceaccounts
	I1007 10:49:24.099524   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:24.099532   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:24.099538   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:24.104028   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:24.104180   23621 default_sa.go:45] found service account: "default"
	I1007 10:49:24.104202   23621 default_sa.go:55] duration metric: took 192.014074ms for default service account to be created ...
	I1007 10:49:24.104214   23621 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 10:49:24.299461   23621 request.go:632] Waited for 195.156179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:49:24.299513   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:49:24.299518   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:24.299525   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:24.299530   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:24.305308   23621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 10:49:24.311531   23621 system_pods.go:86] 24 kube-system pods found
	I1007 10:49:24.311559   23621 system_pods.go:89] "coredns-7c65d6cfc9-ghmwd" [8d8533b9-192b-49a8-8d17-96ffd98cb729] Running
	I1007 10:49:24.311565   23621 system_pods.go:89] "coredns-7c65d6cfc9-xzc88" [f22736c0-5ca4-4c9b-bcd4-cf95f9390507] Running
	I1007 10:49:24.311569   23621 system_pods.go:89] "etcd-ha-406505" [06acd5be-60d4-4e5d-878a-eb237eccef90] Running
	I1007 10:49:24.311575   23621 system_pods.go:89] "etcd-ha-406505-m02" [6bac8986-60bc-4067-8d14-39e7a7d89de4] Running
	I1007 10:49:24.311579   23621 system_pods.go:89] "etcd-ha-406505-m03" [2c0079fb-51f1-423c-8b4c-893824342cd6] Running
	I1007 10:49:24.311583   23621 system_pods.go:89] "kindnet-28vpp" [c14e8bdf-ebc5-4349-adb4-6786cd15551d] Running
	I1007 10:49:24.311589   23621 system_pods.go:89] "kindnet-h8fh4" [4963cef8-d0f0-47a7-a9f3-4ec6cc1cbdd2] Running
	I1007 10:49:24.311593   23621 system_pods.go:89] "kindnet-pt74h" [bb72605c-a772-4b04-a14d-02efe957c9d0] Running
	I1007 10:49:24.311599   23621 system_pods.go:89] "kube-apiserver-ha-406505" [86ec3125-9faf-431c-829c-74bedca10848] Running
	I1007 10:49:24.311602   23621 system_pods.go:89] "kube-apiserver-ha-406505-m02" [9b1f1980-971a-4059-90f3-75aa418811e9] Running
	I1007 10:49:24.311606   23621 system_pods.go:89] "kube-apiserver-ha-406505-m03" [8bc80684-cd9a-40b1-94e1-02cb77917c36] Running
	I1007 10:49:24.311611   23621 system_pods.go:89] "kube-controller-manager-ha-406505" [9f228931-e5fe-4983-aed5-71ec54a10242] Running
	I1007 10:49:24.311617   23621 system_pods.go:89] "kube-controller-manager-ha-406505-m02" [87c1a5f3-2c61-40b6-8ac6-8641fe480883] Running
	I1007 10:49:24.311620   23621 system_pods.go:89] "kube-controller-manager-ha-406505-m03" [ab97ec1a-fb7e-42a5-b77c-721ccf85db1d] Running
	I1007 10:49:24.311626   23621 system_pods.go:89] "kube-proxy-6ng4z" [0bbf71c3-f4c6-44e2-a86f-2528957fd17e] Running
	I1007 10:49:24.311629   23621 system_pods.go:89] "kube-proxy-c79zf" [2b12aaa5-9560-459b-a3bb-e45e73a6b663] Running
	I1007 10:49:24.311635   23621 system_pods.go:89] "kube-proxy-nlnhf" [053080d5-38da-4108-96aa-f4a8dbe5de91] Running
	I1007 10:49:24.311638   23621 system_pods.go:89] "kube-scheduler-ha-406505" [40a9a2f6-4f4e-48eb-8ef2-958b87cea171] Running
	I1007 10:49:24.311643   23621 system_pods.go:89] "kube-scheduler-ha-406505-m02" [b1def4a5-2143-46b6-ae4a-44b7997ec7b2] Running
	I1007 10:49:24.311646   23621 system_pods.go:89] "kube-scheduler-ha-406505-m03" [da8d486f-250a-4961-ac7c-b1435c52a3ca] Running
	I1007 10:49:24.311649   23621 system_pods.go:89] "kube-vip-ha-406505" [e31ca80b-44ca-4b0a-8d38-ba06f81592f8] Running
	I1007 10:49:24.311652   23621 system_pods.go:89] "kube-vip-ha-406505-m02" [982dab2a-18d2-409a-9d27-51f746597898] Running
	I1007 10:49:24.311655   23621 system_pods.go:89] "kube-vip-ha-406505-m03" [a90a6084-73a3-476c-9729-1d8b45c6f3fc] Running
	I1007 10:49:24.311658   23621 system_pods.go:89] "storage-provisioner" [be10b32c-e562-40ef-8b47-04cd1caf9778] Running
	I1007 10:49:24.311664   23621 system_pods.go:126] duration metric: took 207.442478ms to wait for k8s-apps to be running ...
	I1007 10:49:24.311673   23621 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 10:49:24.311718   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 10:49:24.329372   23621 system_svc.go:56] duration metric: took 17.689597ms WaitForService to wait for kubelet
	I1007 10:49:24.329408   23621 kubeadm.go:582] duration metric: took 23.588563567s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 10:49:24.329431   23621 node_conditions.go:102] verifying NodePressure condition ...
	I1007 10:49:24.498716   23621 request.go:632] Waited for 169.197079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes
	I1007 10:49:24.498772   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes
	I1007 10:49:24.498777   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:24.498785   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:24.498788   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:24.502487   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:24.503651   23621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 10:49:24.503669   23621 node_conditions.go:123] node cpu capacity is 2
	I1007 10:49:24.503680   23621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 10:49:24.503684   23621 node_conditions.go:123] node cpu capacity is 2
	I1007 10:49:24.503688   23621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 10:49:24.503691   23621 node_conditions.go:123] node cpu capacity is 2
	I1007 10:49:24.503697   23621 node_conditions.go:105] duration metric: took 174.259877ms to run NodePressure ...
	I1007 10:49:24.503713   23621 start.go:241] waiting for startup goroutines ...
	I1007 10:49:24.503733   23621 start.go:255] writing updated cluster config ...
	I1007 10:49:24.504082   23621 ssh_runner.go:195] Run: rm -f paused
	I1007 10:49:24.554954   23621 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 10:49:24.557268   23621 out.go:177] * Done! kubectl is now configured to use "ha-406505" cluster and "default" namespace by default
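
	[Editor's note, not part of the captured log] The startup log above ends with two post-start checks: an apiserver /healthz probe (expected to return 200 with body "ok") and a kubelet service check run over SSH. The following is a minimal standalone Go sketch of those two checks, for readers reproducing them by hand; it is not minikube code. The endpoint 192.168.39.250:8443 is taken from the log, and InsecureSkipVerify is an illustration-only shortcut (the real client trusts the cluster CA instead).

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"os/exec"
	)

	func main() {
		// apiserver healthz: the log expects HTTP 200 with body "ok".
		httpClient := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		}}
		resp, err := httpClient.Get("https://192.168.39.250:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))

		// kubelet service: the local equivalent of the "systemctl is-active" command
		// the log shows being run on the node via ssh_runner.
		if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
			fmt.Println("kubelet service not active:", err)
		} else {
			fmt.Println("kubelet service active")
		}
	}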
	
	
	==> CRI-O <==
	Oct 07 10:53:20 ha-406505 crio[660]: time="2024-10-07 10:53:20.847669636Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298400847645233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b1726a5-6f96-4a12-876e-5c5f86cad0b5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:20 ha-406505 crio[660]: time="2024-10-07 10:53:20.848390570Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf663ac9-8c41-466e-a11e-be38f3eb9fef name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:20 ha-406505 crio[660]: time="2024-10-07 10:53:20.848567330Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf663ac9-8c41-466e-a11e-be38f3eb9fef name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:20 ha-406505 crio[660]: time="2024-10-07 10:53:20.848912394Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d9a2a1043aa257a5b49746cc308a94d265694817a7f0fbbd68ad171298991f1,PodSandboxId:77c3242ae96e046410e9f20005f1fbb24da588dfaee8e50c326db63c8937c8e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728298170563505966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tzgjx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b76f90b1-386b-4eda-966f-2400d6bf4412,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cd2f018baffde621ecccf90e2bebf1bd5127de1fc5363ec02ef8a77dfe2fb6,PodSandboxId:ce1fc89e90c8e387dff576649434b5b4695d8ebba82ba26bc3f4cefcfc33c65a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728298019505152058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be10b32c-e562-40ef-8b47-04cd1caf9778,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0cc4a36e486c6a488e846bfbf03e43b7da70bb2bf99a153487b87f34d565f12,PodSandboxId:32fee1b9f25d39415e7cd76b67ef4415ac18c57e3916abbb8b84f537d05d70bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019502169374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xzc88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22736c0-5ca4-4c9b-bcd4-cf95f9390507,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ebc4ee6afc90a7e8d867a5ba3221808427590e259e04a5ba21cc1196931b136,PodSandboxId:6142c38866566220880ffbabeb4647565b882222de06d8ef0ebb773d8133ce82,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019443860618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d8533b9-19
2b-49a8-8d17-96ffd98cb729,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4abb8ea9312274b1d39534e4b29dcfa3f2435a34e478f7748745c76ad1380dec,PodSandboxId:33e535c0eb67f0d998329496b1eb04fc05224699267732416929dd8d574448c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17282980
07472938380,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pt74h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb72605c-a772-4b04-a14d-02efe957c9d0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b7425285dcb9164d5c7fff76a667317585d30360dbfa7acfcbbb4564f111ff,PodSandboxId:f6d2bf974f6664bc2f90e25c4c24158ddf2d928418ed69b8269deb018a7c47d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728298007298600158,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlnhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053080d5-38da-4108-96aa-f4a8dbe5de91,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79eb2653667b5edb3b74e57f09824040325c3d1478d135cb84dc2fc3ad5cffdf,PodSandboxId:faf0d86acd1e3016c8943a6859ab004358a6ebe57d3124f3a3f7e2cf653dcbce,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728297999352793772,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bdcf35327874f36021578ca054760a4,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4965d1b169f33dee456a383fa75c0f93cca6bf8017be5bf1f0787d83919887,PodSandboxId:77c273367dc317c1dbfced48163648058a222251f6c9e1c99ceebb3d8512736d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728297996346881143,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10aaa3e84694103c024dc95a3ae5c57f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b63558545dbd5bc6d949b55a25a4e873994215448c6c82acc474c3b3804be46,PodSandboxId:de56de352fe21ebf99f251426625b2e2196c9791eacf4a3a0f0cc212470d6959,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728297996306701860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e0002ddfebe157cb7f0f09bdb94c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11a16a81bf6bf12ab8aa881ebbb9fc65f881114f3caf29ac204033459e11987b,PodSandboxId:b351c9fd7630d9f6bd8758086f9311cf9cf764c9d014e815df7e65a88ac09104,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728297996266468474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406505,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 572e44bb4eeb4579e4fb7c299dd7cd5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0b61d1fd92070463676e33d6b2f91e643d202d53b9af7712c8459581d39750,PodSandboxId:c4fb1e79d237901ac30099998c455123e96284ff720650ca3e08edf1d232d547,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728297996234033589,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406505,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01277ab648416b0c5ac093cf7ea4b7be,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf663ac9-8c41-466e-a11e-be38f3eb9fef name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:20 ha-406505 crio[660]: time="2024-10-07 10:53:20.898654703Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1ba59aa-fe0a-453f-983e-a20c15bac5e2 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:53:20 ha-406505 crio[660]: time="2024-10-07 10:53:20.898789395Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1ba59aa-fe0a-453f-983e-a20c15bac5e2 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:53:20 ha-406505 crio[660]: time="2024-10-07 10:53:20.900674724Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2234279d-5925-43c9-88f3-61a8327320fb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:20 ha-406505 crio[660]: time="2024-10-07 10:53:20.901475449Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298400901395288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2234279d-5925-43c9-88f3-61a8327320fb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:20 ha-406505 crio[660]: time="2024-10-07 10:53:20.902251682Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ba09ade-1629-4144-9262-86600eec80e2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:20 ha-406505 crio[660]: time="2024-10-07 10:53:20.902350496Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ba09ade-1629-4144-9262-86600eec80e2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:20 ha-406505 crio[660]: time="2024-10-07 10:53:20.902739365Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d9a2a1043aa257a5b49746cc308a94d265694817a7f0fbbd68ad171298991f1,PodSandboxId:77c3242ae96e046410e9f20005f1fbb24da588dfaee8e50c326db63c8937c8e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728298170563505966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tzgjx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b76f90b1-386b-4eda-966f-2400d6bf4412,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cd2f018baffde621ecccf90e2bebf1bd5127de1fc5363ec02ef8a77dfe2fb6,PodSandboxId:ce1fc89e90c8e387dff576649434b5b4695d8ebba82ba26bc3f4cefcfc33c65a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728298019505152058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be10b32c-e562-40ef-8b47-04cd1caf9778,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0cc4a36e486c6a488e846bfbf03e43b7da70bb2bf99a153487b87f34d565f12,PodSandboxId:32fee1b9f25d39415e7cd76b67ef4415ac18c57e3916abbb8b84f537d05d70bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019502169374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xzc88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22736c0-5ca4-4c9b-bcd4-cf95f9390507,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ebc4ee6afc90a7e8d867a5ba3221808427590e259e04a5ba21cc1196931b136,PodSandboxId:6142c38866566220880ffbabeb4647565b882222de06d8ef0ebb773d8133ce82,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019443860618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d8533b9-19
2b-49a8-8d17-96ffd98cb729,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4abb8ea9312274b1d39534e4b29dcfa3f2435a34e478f7748745c76ad1380dec,PodSandboxId:33e535c0eb67f0d998329496b1eb04fc05224699267732416929dd8d574448c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17282980
07472938380,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pt74h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb72605c-a772-4b04-a14d-02efe957c9d0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b7425285dcb9164d5c7fff76a667317585d30360dbfa7acfcbbb4564f111ff,PodSandboxId:f6d2bf974f6664bc2f90e25c4c24158ddf2d928418ed69b8269deb018a7c47d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728298007298600158,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlnhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053080d5-38da-4108-96aa-f4a8dbe5de91,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79eb2653667b5edb3b74e57f09824040325c3d1478d135cb84dc2fc3ad5cffdf,PodSandboxId:faf0d86acd1e3016c8943a6859ab004358a6ebe57d3124f3a3f7e2cf653dcbce,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728297999352793772,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bdcf35327874f36021578ca054760a4,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4965d1b169f33dee456a383fa75c0f93cca6bf8017be5bf1f0787d83919887,PodSandboxId:77c273367dc317c1dbfced48163648058a222251f6c9e1c99ceebb3d8512736d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728297996346881143,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10aaa3e84694103c024dc95a3ae5c57f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b63558545dbd5bc6d949b55a25a4e873994215448c6c82acc474c3b3804be46,PodSandboxId:de56de352fe21ebf99f251426625b2e2196c9791eacf4a3a0f0cc212470d6959,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728297996306701860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e0002ddfebe157cb7f0f09bdb94c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11a16a81bf6bf12ab8aa881ebbb9fc65f881114f3caf29ac204033459e11987b,PodSandboxId:b351c9fd7630d9f6bd8758086f9311cf9cf764c9d014e815df7e65a88ac09104,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728297996266468474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406505,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 572e44bb4eeb4579e4fb7c299dd7cd5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0b61d1fd92070463676e33d6b2f91e643d202d53b9af7712c8459581d39750,PodSandboxId:c4fb1e79d237901ac30099998c455123e96284ff720650ca3e08edf1d232d547,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728297996234033589,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406505,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01277ab648416b0c5ac093cf7ea4b7be,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ba09ade-1629-4144-9262-86600eec80e2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:20 ha-406505 crio[660]: time="2024-10-07 10:53:20.948169815Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=14a3f1d9-3d80-4f56-bb78-0ab6d9617f38 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:53:20 ha-406505 crio[660]: time="2024-10-07 10:53:20.948249995Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=14a3f1d9-3d80-4f56-bb78-0ab6d9617f38 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:53:20 ha-406505 crio[660]: time="2024-10-07 10:53:20.949641292Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=922b356b-b8cd-4a85-ada3-c23d511ebbc5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:20 ha-406505 crio[660]: time="2024-10-07 10:53:20.950038651Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298400950018735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=922b356b-b8cd-4a85-ada3-c23d511ebbc5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:20 ha-406505 crio[660]: time="2024-10-07 10:53:20.950568373Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe54437c-0574-4aa8-9609-5794cc181ab2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:20 ha-406505 crio[660]: time="2024-10-07 10:53:20.950622831Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe54437c-0574-4aa8-9609-5794cc181ab2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:20 ha-406505 crio[660]: time="2024-10-07 10:53:20.951117079Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d9a2a1043aa257a5b49746cc308a94d265694817a7f0fbbd68ad171298991f1,PodSandboxId:77c3242ae96e046410e9f20005f1fbb24da588dfaee8e50c326db63c8937c8e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728298170563505966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tzgjx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b76f90b1-386b-4eda-966f-2400d6bf4412,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cd2f018baffde621ecccf90e2bebf1bd5127de1fc5363ec02ef8a77dfe2fb6,PodSandboxId:ce1fc89e90c8e387dff576649434b5b4695d8ebba82ba26bc3f4cefcfc33c65a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728298019505152058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be10b32c-e562-40ef-8b47-04cd1caf9778,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0cc4a36e486c6a488e846bfbf03e43b7da70bb2bf99a153487b87f34d565f12,PodSandboxId:32fee1b9f25d39415e7cd76b67ef4415ac18c57e3916abbb8b84f537d05d70bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019502169374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xzc88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22736c0-5ca4-4c9b-bcd4-cf95f9390507,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ebc4ee6afc90a7e8d867a5ba3221808427590e259e04a5ba21cc1196931b136,PodSandboxId:6142c38866566220880ffbabeb4647565b882222de06d8ef0ebb773d8133ce82,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019443860618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d8533b9-19
2b-49a8-8d17-96ffd98cb729,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4abb8ea9312274b1d39534e4b29dcfa3f2435a34e478f7748745c76ad1380dec,PodSandboxId:33e535c0eb67f0d998329496b1eb04fc05224699267732416929dd8d574448c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17282980
07472938380,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pt74h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb72605c-a772-4b04-a14d-02efe957c9d0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b7425285dcb9164d5c7fff76a667317585d30360dbfa7acfcbbb4564f111ff,PodSandboxId:f6d2bf974f6664bc2f90e25c4c24158ddf2d928418ed69b8269deb018a7c47d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728298007298600158,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlnhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053080d5-38da-4108-96aa-f4a8dbe5de91,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79eb2653667b5edb3b74e57f09824040325c3d1478d135cb84dc2fc3ad5cffdf,PodSandboxId:faf0d86acd1e3016c8943a6859ab004358a6ebe57d3124f3a3f7e2cf653dcbce,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728297999352793772,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bdcf35327874f36021578ca054760a4,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4965d1b169f33dee456a383fa75c0f93cca6bf8017be5bf1f0787d83919887,PodSandboxId:77c273367dc317c1dbfced48163648058a222251f6c9e1c99ceebb3d8512736d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728297996346881143,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10aaa3e84694103c024dc95a3ae5c57f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b63558545dbd5bc6d949b55a25a4e873994215448c6c82acc474c3b3804be46,PodSandboxId:de56de352fe21ebf99f251426625b2e2196c9791eacf4a3a0f0cc212470d6959,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728297996306701860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e0002ddfebe157cb7f0f09bdb94c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11a16a81bf6bf12ab8aa881ebbb9fc65f881114f3caf29ac204033459e11987b,PodSandboxId:b351c9fd7630d9f6bd8758086f9311cf9cf764c9d014e815df7e65a88ac09104,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728297996266468474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406505,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 572e44bb4eeb4579e4fb7c299dd7cd5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0b61d1fd92070463676e33d6b2f91e643d202d53b9af7712c8459581d39750,PodSandboxId:c4fb1e79d237901ac30099998c455123e96284ff720650ca3e08edf1d232d547,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728297996234033589,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406505,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01277ab648416b0c5ac093cf7ea4b7be,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe54437c-0574-4aa8-9609-5794cc181ab2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:20 ha-406505 crio[660]: time="2024-10-07 10:53:20.995220894Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4721cb55-74a8-4f1d-bbc5-25952ae2f500 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:53:20 ha-406505 crio[660]: time="2024-10-07 10:53:20.995297489Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4721cb55-74a8-4f1d-bbc5-25952ae2f500 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:53:20 ha-406505 crio[660]: time="2024-10-07 10:53:20.997158486Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=007f4d69-7d76-4c6d-82fe-0ddbd507bf4b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:20 ha-406505 crio[660]: time="2024-10-07 10:53:20.997805429Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298400997776103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=007f4d69-7d76-4c6d-82fe-0ddbd507bf4b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:20 ha-406505 crio[660]: time="2024-10-07 10:53:20.998602345Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5f3acea-ab04-4129-ab2b-09fc7ec7c711 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:20 ha-406505 crio[660]: time="2024-10-07 10:53:20.998658955Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5f3acea-ab04-4129-ab2b-09fc7ec7c711 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:20 ha-406505 crio[660]: time="2024-10-07 10:53:20.998892224Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d9a2a1043aa257a5b49746cc308a94d265694817a7f0fbbd68ad171298991f1,PodSandboxId:77c3242ae96e046410e9f20005f1fbb24da588dfaee8e50c326db63c8937c8e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728298170563505966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tzgjx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b76f90b1-386b-4eda-966f-2400d6bf4412,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cd2f018baffde621ecccf90e2bebf1bd5127de1fc5363ec02ef8a77dfe2fb6,PodSandboxId:ce1fc89e90c8e387dff576649434b5b4695d8ebba82ba26bc3f4cefcfc33c65a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728298019505152058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be10b32c-e562-40ef-8b47-04cd1caf9778,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0cc4a36e486c6a488e846bfbf03e43b7da70bb2bf99a153487b87f34d565f12,PodSandboxId:32fee1b9f25d39415e7cd76b67ef4415ac18c57e3916abbb8b84f537d05d70bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019502169374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xzc88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22736c0-5ca4-4c9b-bcd4-cf95f9390507,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ebc4ee6afc90a7e8d867a5ba3221808427590e259e04a5ba21cc1196931b136,PodSandboxId:6142c38866566220880ffbabeb4647565b882222de06d8ef0ebb773d8133ce82,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019443860618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d8533b9-19
2b-49a8-8d17-96ffd98cb729,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4abb8ea9312274b1d39534e4b29dcfa3f2435a34e478f7748745c76ad1380dec,PodSandboxId:33e535c0eb67f0d998329496b1eb04fc05224699267732416929dd8d574448c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17282980
07472938380,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pt74h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb72605c-a772-4b04-a14d-02efe957c9d0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b7425285dcb9164d5c7fff76a667317585d30360dbfa7acfcbbb4564f111ff,PodSandboxId:f6d2bf974f6664bc2f90e25c4c24158ddf2d928418ed69b8269deb018a7c47d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728298007298600158,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlnhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053080d5-38da-4108-96aa-f4a8dbe5de91,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79eb2653667b5edb3b74e57f09824040325c3d1478d135cb84dc2fc3ad5cffdf,PodSandboxId:faf0d86acd1e3016c8943a6859ab004358a6ebe57d3124f3a3f7e2cf653dcbce,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728297999352793772,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bdcf35327874f36021578ca054760a4,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4965d1b169f33dee456a383fa75c0f93cca6bf8017be5bf1f0787d83919887,PodSandboxId:77c273367dc317c1dbfced48163648058a222251f6c9e1c99ceebb3d8512736d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728297996346881143,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10aaa3e84694103c024dc95a3ae5c57f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b63558545dbd5bc6d949b55a25a4e873994215448c6c82acc474c3b3804be46,PodSandboxId:de56de352fe21ebf99f251426625b2e2196c9791eacf4a3a0f0cc212470d6959,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728297996306701860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e0002ddfebe157cb7f0f09bdb94c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11a16a81bf6bf12ab8aa881ebbb9fc65f881114f3caf29ac204033459e11987b,PodSandboxId:b351c9fd7630d9f6bd8758086f9311cf9cf764c9d014e815df7e65a88ac09104,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728297996266468474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406505,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 572e44bb4eeb4579e4fb7c299dd7cd5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0b61d1fd92070463676e33d6b2f91e643d202d53b9af7712c8459581d39750,PodSandboxId:c4fb1e79d237901ac30099998c455123e96284ff720650ca3e08edf1d232d547,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728297996234033589,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406505,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01277ab648416b0c5ac093cf7ea4b7be,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5f3acea-ab04-4129-ab2b-09fc7ec7c711 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4d9a2a1043aa2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   77c3242ae96e0       busybox-7dff88458-tzgjx
	77cd2f018baff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   ce1fc89e90c8e       storage-provisioner
	b0cc4a36e486c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   32fee1b9f25d3       coredns-7c65d6cfc9-xzc88
	0ebc4ee6afc90       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   6142c38866566       coredns-7c65d6cfc9-ghmwd
	4abb8ea931227       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   33e535c0eb67f       kindnet-pt74h
	99b7425285dcb       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   f6d2bf974f666       kube-proxy-nlnhf
	79eb2653667b5       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   faf0d86acd1e3       kube-vip-ha-406505
	fa4965d1b169f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   77c273367dc31       kube-scheduler-ha-406505
	5b63558545dbd       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   de56de352fe21       kube-apiserver-ha-406505
	11a16a81bf6bf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   b351c9fd7630d       etcd-ha-406505
	eb0b61d1fd920       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   c4fb1e79d2379       kube-controller-manager-ha-406505
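
	The listing above mirrors a crictl-style view of every container on the primary control plane; assuming the ha-406505 profile from this run is still up, roughly the same snapshot could be pulled with the sketch below (the command form is an assumption, not something this log records):

	  # list all containers, running and exited, as CRI-O on the node sees them
	  minikube ssh -p ha-406505 -- sudo crictl ps -a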
	
	
	==> coredns [0ebc4ee6afc90a7e8d867a5ba3221808427590e259e04a5ba21cc1196931b136] <==
	[INFO] 10.244.1.2:52141 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000229841s
	[INFO] 10.244.1.2:49387 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177541s
	[INFO] 10.244.1.2:51777 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003610459s
	[INFO] 10.244.1.2:53883 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000188749s
	[INFO] 10.244.2.2:56490 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126634s
	[INFO] 10.244.2.2:39507 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008519s
	[INFO] 10.244.2.2:51465 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085975s
	[INFO] 10.244.2.2:54662 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000141674s
	[INFO] 10.244.0.4:60148 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114521s
	[INFO] 10.244.0.4:60136 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000061595s
	[INFO] 10.244.0.4:58172 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000046455s
	[INFO] 10.244.0.4:37188 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001182047s
	[INFO] 10.244.0.4:43590 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000115472s
	[INFO] 10.244.0.4:58012 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000033373s
	[INFO] 10.244.1.2:49885 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158136s
	[INFO] 10.244.1.2:37058 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108137s
	[INFO] 10.244.1.2:53254 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014209s
	[INFO] 10.244.2.2:48605 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000226971s
	[INFO] 10.244.0.4:56354 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139347s
	[INFO] 10.244.0.4:53408 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091527s
	[INFO] 10.244.1.2:56944 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148755s
	[INFO] 10.244.1.2:35017 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000240968s
	[INFO] 10.244.1.2:60956 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000156011s
	[INFO] 10.244.2.2:52452 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000151278s
	[INFO] 10.244.0.4:37523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000081767s
	
	
	==> coredns [b0cc4a36e486c6a488e846bfbf03e43b7da70bb2bf99a153487b87f34d565f12] <==
	[INFO] 10.244.2.2:48222 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000340345s
	[INFO] 10.244.2.2:43370 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001307969s
	[INFO] 10.244.0.4:43661 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000100802s
	[INFO] 10.244.0.4:58476 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001778301s
	[INFO] 10.244.1.2:33672 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000201181s
	[INFO] 10.244.1.2:45107 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000305371s
	[INFO] 10.244.2.2:49200 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000294988s
	[INFO] 10.244.2.2:49393 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001850366s
	[INFO] 10.244.2.2:48213 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001471137s
	[INFO] 10.244.2.2:60468 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152254s
	[INFO] 10.244.0.4:59551 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001687745s
	[INFO] 10.244.0.4:49859 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000044844s
	[INFO] 10.244.1.2:53294 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000358207s
	[INFO] 10.244.2.2:48456 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119873s
	[INFO] 10.244.2.2:52623 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000223935s
	[INFO] 10.244.2.2:35737 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000161301s
	[INFO] 10.244.0.4:48948 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099818s
	[INFO] 10.244.0.4:38842 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000194312s
	[INFO] 10.244.1.2:52889 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000213247s
	[INFO] 10.244.2.2:54256 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000280783s
	[INFO] 10.244.2.2:50232 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000318899s
	[INFO] 10.244.2.2:39214 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000147924s
	[INFO] 10.244.0.4:53521 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112358s
	[INFO] 10.244.0.4:49217 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000161935s
	[INFO] 10.244.0.4:32867 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109582s
	
	
	==> describe nodes <==
	Name:               ha-406505
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406505
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f
	                    minikube.k8s.io/name=ha-406505
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T10_46_43_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 10:46:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406505
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 10:53:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 10:49:45 +0000   Mon, 07 Oct 2024 10:46:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 10:49:45 +0000   Mon, 07 Oct 2024 10:46:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 10:49:45 +0000   Mon, 07 Oct 2024 10:46:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 10:49:45 +0000   Mon, 07 Oct 2024 10:46:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    ha-406505
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f87dab03082f46978f270a1d9209ed7f
	  System UUID:                f87dab03-082f-4697-8f27-0a1d9209ed7f
	  Boot ID:                    c90db251-8dbe-47f3-98dd-72c0b5cbd489
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-tzgjx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 coredns-7c65d6cfc9-ghmwd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m34s
	  kube-system                 coredns-7c65d6cfc9-xzc88             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m34s
	  kube-system                 etcd-ha-406505                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m39s
	  kube-system                 kindnet-pt74h                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m35s
	  kube-system                 kube-apiserver-ha-406505             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 kube-controller-manager-ha-406505    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 kube-proxy-nlnhf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-scheduler-ha-406505             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 kube-vip-ha-406505                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m33s  kube-proxy       
	  Normal  Starting                 6m39s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m39s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m39s  kubelet          Node ha-406505 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m39s  kubelet          Node ha-406505 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m39s  kubelet          Node ha-406505 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m35s  node-controller  Node ha-406505 event: Registered Node ha-406505 in Controller
	  Normal  NodeReady                6m23s  kubelet          Node ha-406505 status is now: NodeReady
	  Normal  RegisteredNode           5m35s  node-controller  Node ha-406505 event: Registered Node ha-406505 in Controller
	  Normal  RegisteredNode           4m16s  node-controller  Node ha-406505 event: Registered Node ha-406505 in Controller
	
	
	Name:               ha-406505-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406505-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f
	                    minikube.k8s.io/name=ha-406505
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T10_47_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 10:47:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406505-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 10:50:41 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 07 Oct 2024 10:49:40 +0000   Mon, 07 Oct 2024 10:51:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 07 Oct 2024 10:49:40 +0000   Mon, 07 Oct 2024 10:51:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 07 Oct 2024 10:49:40 +0000   Mon, 07 Oct 2024 10:51:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 07 Oct 2024 10:49:40 +0000   Mon, 07 Oct 2024 10:51:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.37
	  Hostname:    ha-406505-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad0b7870a2a54204abf112edd9c072ce
	  System UUID:                ad0b7870-a2a5-4204-abf1-12edd9c072ce
	  Boot ID:                    0b4627e5-d7a2-40a3-9d63-8cae53190740
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-bjz2q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 etcd-ha-406505-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m41s
	  kube-system                 kindnet-h8fh4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m43s
	  kube-system                 kube-apiserver-ha-406505-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m42s
	  kube-system                 kube-controller-manager-ha-406505-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m42s
	  kube-system                 kube-proxy-6ng4z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-scheduler-ha-406505-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m42s
	  kube-system                 kube-vip-ha-406505-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m38s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m43s (x8 over 5m43s)  kubelet          Node ha-406505-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m43s (x8 over 5m43s)  kubelet          Node ha-406505-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m43s (x7 over 5m43s)  kubelet          Node ha-406505-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m40s                  node-controller  Node ha-406505-m02 event: Registered Node ha-406505-m02 in Controller
	  Normal  RegisteredNode           5m35s                  node-controller  Node ha-406505-m02 event: Registered Node ha-406505-m02 in Controller
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-406505-m02 event: Registered Node ha-406505-m02 in Controller
	  Normal  NodeNotReady             116s                   node-controller  Node ha-406505-m02 status is now: NodeNotReady
	
	
	Name:               ha-406505-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406505-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f
	                    minikube.k8s.io/name=ha-406505
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T10_49_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 10:48:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406505-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 10:53:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 10:49:57 +0000   Mon, 07 Oct 2024 10:48:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 10:49:57 +0000   Mon, 07 Oct 2024 10:48:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 10:49:57 +0000   Mon, 07 Oct 2024 10:48:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 10:49:57 +0000   Mon, 07 Oct 2024 10:49:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-406505-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 75575a7b8eb34e0589ff800419073c6f
	  System UUID:                75575a7b-8eb3-4e05-89ff-800419073c6f
	  Boot ID:                    797c7f20-765b-4e29-a483-d65c033a2625
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ktkg9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 etcd-ha-406505-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m23s
	  kube-system                 kindnet-28vpp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m25s
	  kube-system                 kube-apiserver-ha-406505-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-controller-manager-ha-406505-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-proxy-c79zf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-scheduler-ha-406505-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-vip-ha-406505-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m19s                  kube-proxy       
	  Normal  RegisteredNode           4m25s                  node-controller  Node ha-406505-m03 event: Registered Node ha-406505-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m25s (x8 over 4m25s)  kubelet          Node ha-406505-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m25s (x8 over 4m25s)  kubelet          Node ha-406505-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m25s (x7 over 4m25s)  kubelet          Node ha-406505-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m20s                  node-controller  Node ha-406505-m03 event: Registered Node ha-406505-m03 in Controller
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-406505-m03 event: Registered Node ha-406505-m03 in Controller
	
	
	Name:               ha-406505-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406505-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f
	                    minikube.k8s.io/name=ha-406505
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T10_50_06_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 10:50:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406505-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 10:53:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 10:50:36 +0000   Mon, 07 Oct 2024 10:50:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 10:50:36 +0000   Mon, 07 Oct 2024 10:50:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 10:50:36 +0000   Mon, 07 Oct 2024 10:50:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 10:50:36 +0000   Mon, 07 Oct 2024 10:50:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    ha-406505-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9eb4bdac85cb424a99b5076fbfc659b6
	  System UUID:                9eb4bdac-85cb-424a-99b5-076fbfc659b6
	  Boot ID:                    6e48a403-8d50-4a51-beab-d3d8e1e29c60
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-cqsll       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m16s
	  kube-system                 kube-proxy-8n5g6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m10s                  kube-proxy       
	  Normal  RegisteredNode           3m16s                  node-controller  Node ha-406505-m04 event: Registered Node ha-406505-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m16s (x2 over 3m16s)  kubelet          Node ha-406505-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m16s (x2 over 3m16s)  kubelet          Node ha-406505-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m16s (x2 over 3m16s)  kubelet          Node ha-406505-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m15s                  node-controller  Node ha-406505-m04 event: Registered Node ha-406505-m04 in Controller
	  Normal  RegisteredNode           3m15s                  node-controller  Node ha-406505-m04 event: Registered Node ha-406505-m04 in Controller
	  Normal  NodeReady                2m55s                  kubelet          Node ha-406505-m04 status is now: NodeReady
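
	Across the four node descriptions above, only ha-406505-m02 carries the node.kubernetes.io/unreachable taints and Unknown conditions; its lease stopped renewing at 10:50:41 and the node-controller marked it NotReady at 10:51:25, while the primary, m03 and m04 all report Ready. A hedged sketch for re-checking just that node against the same cluster (the kubectl context name is assumed to match the profile):

	  # confirm which nodes are Ready and inspect the one that went silent
	  kubectl --context ha-406505 get nodes -o wide
	  kubectl --context ha-406505 describe node ha-406505-m02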
	
	
	==> dmesg <==
	[Oct 7 10:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051371] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040405] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.858113] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.711350] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.602582] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.722628] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.057663] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056433] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.169114] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.137291] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.300660] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +4.116084] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +3.680655] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.069150] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.087227] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.089104] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.196698] kauditd_printk_skb: 31 callbacks suppressed
	[ +11.900338] kauditd_printk_skb: 28 callbacks suppressed
	[Oct 7 10:47] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [11a16a81bf6bf12ab8aa881ebbb9fc65f881114f3caf29ac204033459e11987b] <==
	{"level":"warn","ts":"2024-10-07T10:53:21.251514Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:21.284807Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:21.292346Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:21.296192Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:21.304838Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:21.309023Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:21.311689Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:21.324181Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:21.327739Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:21.331328Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:21.337284Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:21.344125Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:21.350836Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:21.351864Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:21.354705Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:21.358528Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:21.364304Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:21.370593Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:21.377199Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:21.380625Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:21.383673Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:21.387487Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:21.394798Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:21.423824Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:21.451854Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:53:21 up 7 min,  0 users,  load average: 0.84, 0.61, 0.28
	Linux ha-406505 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4abb8ea9312274b1d39534e4b29dcfa3f2435a34e478f7748745c76ad1380dec] <==
	I1007 10:52:48.825838       1 main.go:322] Node ha-406505-m03 has CIDR [10.244.2.0/24] 
	I1007 10:52:58.833626       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I1007 10:52:58.833675       1 main.go:299] handling current node
	I1007 10:52:58.833690       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I1007 10:52:58.833695       1 main.go:322] Node ha-406505-m02 has CIDR [10.244.1.0/24] 
	I1007 10:52:58.833864       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I1007 10:52:58.833902       1 main.go:322] Node ha-406505-m03 has CIDR [10.244.2.0/24] 
	I1007 10:52:58.833984       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I1007 10:52:58.834007       1 main.go:322] Node ha-406505-m04 has CIDR [10.244.3.0/24] 
	I1007 10:53:08.831971       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I1007 10:53:08.832046       1 main.go:322] Node ha-406505-m02 has CIDR [10.244.1.0/24] 
	I1007 10:53:08.832167       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I1007 10:53:08.832188       1 main.go:322] Node ha-406505-m03 has CIDR [10.244.2.0/24] 
	I1007 10:53:08.832260       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I1007 10:53:08.832280       1 main.go:322] Node ha-406505-m04 has CIDR [10.244.3.0/24] 
	I1007 10:53:08.832356       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I1007 10:53:08.832375       1 main.go:299] handling current node
	I1007 10:53:18.831206       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I1007 10:53:18.831277       1 main.go:299] handling current node
	I1007 10:53:18.831346       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I1007 10:53:18.831353       1 main.go:322] Node ha-406505-m02 has CIDR [10.244.1.0/24] 
	I1007 10:53:18.831556       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I1007 10:53:18.831582       1 main.go:322] Node ha-406505-m03 has CIDR [10.244.2.0/24] 
	I1007 10:53:18.831637       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I1007 10:53:18.831656       1 main.go:322] Node ha-406505-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [5b63558545dbd5bc6d949b55a25a4e873994215448c6c82acc474c3b3804be46] <==
	W1007 10:46:41.183638       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.250]
	I1007 10:46:41.185270       1 controller.go:615] quota admission added evaluator for: endpoints
	I1007 10:46:41.191014       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1007 10:46:41.276253       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1007 10:46:42.491094       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1007 10:46:42.518362       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1007 10:46:42.533655       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1007 10:46:46.678876       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1007 10:46:46.902258       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1007 10:49:31.707971       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59314: use of closed network connection
	E1007 10:49:31.903823       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59340: use of closed network connection
	E1007 10:49:32.086294       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59358: use of closed network connection
	E1007 10:49:32.297595       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59380: use of closed network connection
	E1007 10:49:32.498258       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59404: use of closed network connection
	E1007 10:49:32.676693       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59420: use of closed network connection
	E1007 10:49:32.859242       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59440: use of closed network connection
	E1007 10:49:33.057965       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59468: use of closed network connection
	E1007 10:49:33.240103       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59478: use of closed network connection
	E1007 10:49:33.559788       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59494: use of closed network connection
	E1007 10:49:33.755853       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59504: use of closed network connection
	E1007 10:49:33.944169       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59516: use of closed network connection
	E1007 10:49:34.136074       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59544: use of closed network connection
	E1007 10:49:34.332211       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59568: use of closed network connection
	E1007 10:49:34.527795       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59588: use of closed network connection
	W1007 10:51:01.196929       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.250]
	
	
	==> kube-controller-manager [eb0b61d1fd92070463676e33d6b2f91e643d202d53b9af7712c8459581d39750] <==
	I1007 10:50:05.605601       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-406505-m04\" does not exist"
	I1007 10:50:05.651707       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-406505-m04" podCIDRs=["10.244.3.0/24"]
	I1007 10:50:05.651878       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:05.652095       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:05.866588       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:06.004135       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:06.156174       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:06.156822       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-406505-m04"
	I1007 10:50:06.254557       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:06.312035       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:06.987679       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:07.073914       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:15.971952       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:26.980381       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-406505-m04"
	I1007 10:50:26.982232       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:27.002591       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:27.205853       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:36.177995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:51:25.956486       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m02"
	I1007 10:51:25.956910       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-406505-m04"
	I1007 10:51:25.977091       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m02"
	I1007 10:51:26.074899       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.887988ms"
	I1007 10:51:26.075025       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.368µs"
	I1007 10:51:26.200250       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m02"
	I1007 10:51:31.167674       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m02"
	
	
	==> kube-proxy [99b7425285dcb9164d5c7fff76a667317585d30360dbfa7acfcbbb4564f111ff] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 10:46:47.887571       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 10:46:47.911134       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.250"]
	E1007 10:46:47.911278       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 10:46:47.980015       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 10:46:47.980045       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 10:46:47.980074       1 server_linux.go:169] "Using iptables Proxier"
	I1007 10:46:47.983497       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 10:46:47.984580       1 server.go:483] "Version info" version="v1.31.1"
	I1007 10:46:47.984594       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 10:46:47.987677       1 config.go:199] "Starting service config controller"
	I1007 10:46:47.988455       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 10:46:47.988871       1 config.go:105] "Starting endpoint slice config controller"
	I1007 10:46:47.988960       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 10:46:47.990124       1 config.go:328] "Starting node config controller"
	I1007 10:46:47.990263       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 10:46:48.088926       1 shared_informer.go:320] Caches are synced for service config
	I1007 10:46:48.090118       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 10:46:48.090928       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [fa4965d1b169f33dee456a383fa75c0f93cca6bf8017be5bf1f0787d83919887] <==
	W1007 10:46:40.575139       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 10:46:40.575275       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 10:46:40.704893       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 10:46:40.704946       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 10:46:40.706026       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 10:46:40.706071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 10:46:40.735457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 10:46:40.735594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 10:46:40.745564       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1007 10:46:40.745701       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1007 10:46:40.956352       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 10:46:40.956445       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1007 10:46:43.102324       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1007 10:50:05.717930       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-cqsll\": pod kindnet-cqsll is already assigned to node \"ha-406505-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-cqsll" node="ha-406505-m04"
	E1007 10:50:05.719300       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 62093c84-d91b-44ed-a605-198bd057ee89(kube-system/kindnet-cqsll) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-cqsll"
	E1007 10:50:05.719513       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-cqsll\": pod kindnet-cqsll is already assigned to node \"ha-406505-m04\"" pod="kube-system/kindnet-cqsll"
	I1007 10:50:05.719601       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-cqsll" node="ha-406505-m04"
	E1007 10:50:05.720316       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-8n5g6\": pod kube-proxy-8n5g6 is already assigned to node \"ha-406505-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-8n5g6" node="ha-406505-m04"
	E1007 10:50:05.724984       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod df46b5c0-261e-4455-bda8-d73ef0b24faa(kube-system/kube-proxy-8n5g6) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-8n5g6"
	E1007 10:50:05.725159       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8n5g6\": pod kube-proxy-8n5g6 is already assigned to node \"ha-406505-m04\"" pod="kube-system/kube-proxy-8n5g6"
	I1007 10:50:05.725258       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-8n5g6" node="ha-406505-m04"
	E1007 10:50:05.734867       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-957n4\": pod kindnet-957n4 is already assigned to node \"ha-406505-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-957n4" node="ha-406505-m04"
	E1007 10:50:05.736396       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9b6e172b-6f7a-48e1-8a89-60f70e5b77f6(kube-system/kindnet-957n4) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-957n4"
	E1007 10:50:05.736761       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-957n4\": pod kindnet-957n4 is already assigned to node \"ha-406505-m04\"" pod="kube-system/kindnet-957n4"
	I1007 10:50:05.736855       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-957n4" node="ha-406505-m04"
	
	
	==> kubelet <==
	Oct 07 10:51:42 ha-406505 kubelet[1306]: E1007 10:51:42.610847    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298302610335333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:51:42 ha-406505 kubelet[1306]: E1007 10:51:42.610884    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298302610335333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:51:52 ha-406505 kubelet[1306]: E1007 10:51:52.612666    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298312612090878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:51:52 ha-406505 kubelet[1306]: E1007 10:51:52.612749    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298312612090878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:02 ha-406505 kubelet[1306]: E1007 10:52:02.614917    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298322614471502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:02 ha-406505 kubelet[1306]: E1007 10:52:02.615287    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298322614471502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:12 ha-406505 kubelet[1306]: E1007 10:52:12.617387    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298332617012708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:12 ha-406505 kubelet[1306]: E1007 10:52:12.617780    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298332617012708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:22 ha-406505 kubelet[1306]: E1007 10:52:22.620172    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298342619770777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:22 ha-406505 kubelet[1306]: E1007 10:52:22.620593    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298342619770777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:32 ha-406505 kubelet[1306]: E1007 10:52:32.622744    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298352622225858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:32 ha-406505 kubelet[1306]: E1007 10:52:32.622792    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298352622225858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:42 ha-406505 kubelet[1306]: E1007 10:52:42.472254    1306 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 10:52:42 ha-406505 kubelet[1306]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 10:52:42 ha-406505 kubelet[1306]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 10:52:42 ha-406505 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 10:52:42 ha-406505 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 10:52:42 ha-406505 kubelet[1306]: E1007 10:52:42.624989    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298362624467928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:42 ha-406505 kubelet[1306]: E1007 10:52:42.625274    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298362624467928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:52 ha-406505 kubelet[1306]: E1007 10:52:52.627616    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298372626959180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:52 ha-406505 kubelet[1306]: E1007 10:52:52.627689    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298372626959180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:53:02 ha-406505 kubelet[1306]: E1007 10:53:02.630238    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298382629746151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:53:02 ha-406505 kubelet[1306]: E1007 10:53:02.630676    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298382629746151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:53:12 ha-406505 kubelet[1306]: E1007 10:53:12.633509    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298392632773901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:53:12 ha-406505 kubelet[1306]: E1007 10:53:12.633800    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298392632773901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-406505 -n ha-406505
helpers_test.go:261: (dbg) Run:  kubectl --context ha-406505 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.56s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.057265586s)
ha_test.go:309: expected profile "ha-406505" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-406505\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-406505\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-406505\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.250\",\"Port\":8443,\"Kubern
etesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.37\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.102\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.2\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"met
allb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":26
2144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-406505 -n ha-406505
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-406505 logs -n 25: (1.413527125s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m03:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505:/home/docker/cp-test_ha-406505-m03_ha-406505.txt                       |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505 sudo cat                                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m03_ha-406505.txt                                 |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m03:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m02:/home/docker/cp-test_ha-406505-m03_ha-406505-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m02 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m03_ha-406505-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m03:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04:/home/docker/cp-test_ha-406505-m03_ha-406505-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m04 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m03_ha-406505-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-406505 cp testdata/cp-test.txt                                                | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2665267876/001/cp-test_ha-406505-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505:/home/docker/cp-test_ha-406505-m04_ha-406505.txt                       |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505 sudo cat                                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m04_ha-406505.txt                                 |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m02:/home/docker/cp-test_ha-406505-m04_ha-406505-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m02 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m04_ha-406505-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03:/home/docker/cp-test_ha-406505-m04_ha-406505-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m03 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m04_ha-406505-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-406505 node stop m02 -v=7                                                     | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-406505 node start m02 -v=7                                                    | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 10:46:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 10:46:00.685163   23621 out.go:345] Setting OutFile to fd 1 ...
	I1007 10:46:00.685349   23621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:46:00.685361   23621 out.go:358] Setting ErrFile to fd 2...
	I1007 10:46:00.685369   23621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:46:00.685896   23621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 10:46:00.686526   23621 out.go:352] Setting JSON to false
	I1007 10:46:00.687357   23621 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1655,"bootTime":1728296306,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 10:46:00.687449   23621 start.go:139] virtualization: kvm guest
	I1007 10:46:00.689739   23621 out.go:177] * [ha-406505] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 10:46:00.691129   23621 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 10:46:00.691156   23621 notify.go:220] Checking for updates...
	I1007 10:46:00.693697   23621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 10:46:00.695072   23621 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:46:00.696501   23621 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:46:00.697726   23621 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 10:46:00.698926   23621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 10:46:00.700212   23621 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 10:46:00.737316   23621 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 10:46:00.738839   23621 start.go:297] selected driver: kvm2
	I1007 10:46:00.738857   23621 start.go:901] validating driver "kvm2" against <nil>
	I1007 10:46:00.738870   23621 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 10:46:00.739587   23621 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:46:00.739673   23621 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19761-3912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 10:46:00.755165   23621 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 10:46:00.755211   23621 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 10:46:00.755442   23621 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 10:46:00.755469   23621 cni.go:84] Creating CNI manager for ""
	I1007 10:46:00.755509   23621 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1007 10:46:00.755520   23621 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 10:46:00.755574   23621 start.go:340] cluster config:
	{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1007 10:46:00.755686   23621 iso.go:125] acquiring lock: {Name:mk894f62b1e70fe8556c552f818beb1e4be6fad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:46:00.757513   23621 out.go:177] * Starting "ha-406505" primary control-plane node in "ha-406505" cluster
	I1007 10:46:00.758765   23621 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:46:00.758805   23621 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 10:46:00.758823   23621 cache.go:56] Caching tarball of preloaded images
	I1007 10:46:00.758896   23621 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 10:46:00.758906   23621 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 10:46:00.759224   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:46:00.759245   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json: {Name:mk9b03e101af877bc71d822d951dd0373d9dda34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:00.759379   23621 start.go:360] acquireMachinesLock for ha-406505: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 10:46:00.759405   23621 start.go:364] duration metric: took 14.549µs to acquireMachinesLock for "ha-406505"
	I1007 10:46:00.759421   23621 start.go:93] Provisioning new machine with config: &{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:46:00.759479   23621 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 10:46:00.761273   23621 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 10:46:00.761420   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:00.761466   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:00.775977   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35573
	I1007 10:46:00.776393   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:00.776945   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:00.776968   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:00.777275   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:00.777446   23621 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:46:00.777589   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:00.777737   23621 start.go:159] libmachine.API.Create for "ha-406505" (driver="kvm2")
	I1007 10:46:00.777767   23621 client.go:168] LocalClient.Create starting
	I1007 10:46:00.777806   23621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem
	I1007 10:46:00.777846   23621 main.go:141] libmachine: Decoding PEM data...
	I1007 10:46:00.777867   23621 main.go:141] libmachine: Parsing certificate...
	I1007 10:46:00.777925   23621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem
	I1007 10:46:00.777949   23621 main.go:141] libmachine: Decoding PEM data...
	I1007 10:46:00.777966   23621 main.go:141] libmachine: Parsing certificate...
	I1007 10:46:00.777989   23621 main.go:141] libmachine: Running pre-create checks...
	I1007 10:46:00.778000   23621 main.go:141] libmachine: (ha-406505) Calling .PreCreateCheck
	I1007 10:46:00.778317   23621 main.go:141] libmachine: (ha-406505) Calling .GetConfigRaw
	I1007 10:46:00.778644   23621 main.go:141] libmachine: Creating machine...
	I1007 10:46:00.778656   23621 main.go:141] libmachine: (ha-406505) Calling .Create
	I1007 10:46:00.778771   23621 main.go:141] libmachine: (ha-406505) Creating KVM machine...
	I1007 10:46:00.779972   23621 main.go:141] libmachine: (ha-406505) DBG | found existing default KVM network
	I1007 10:46:00.780650   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:00.780522   23644 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a50}
	I1007 10:46:00.780693   23621 main.go:141] libmachine: (ha-406505) DBG | created network xml: 
	I1007 10:46:00.780713   23621 main.go:141] libmachine: (ha-406505) DBG | <network>
	I1007 10:46:00.780722   23621 main.go:141] libmachine: (ha-406505) DBG |   <name>mk-ha-406505</name>
	I1007 10:46:00.780732   23621 main.go:141] libmachine: (ha-406505) DBG |   <dns enable='no'/>
	I1007 10:46:00.780741   23621 main.go:141] libmachine: (ha-406505) DBG |   
	I1007 10:46:00.780752   23621 main.go:141] libmachine: (ha-406505) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1007 10:46:00.780763   23621 main.go:141] libmachine: (ha-406505) DBG |     <dhcp>
	I1007 10:46:00.780774   23621 main.go:141] libmachine: (ha-406505) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1007 10:46:00.780793   23621 main.go:141] libmachine: (ha-406505) DBG |     </dhcp>
	I1007 10:46:00.780806   23621 main.go:141] libmachine: (ha-406505) DBG |   </ip>
	I1007 10:46:00.780813   23621 main.go:141] libmachine: (ha-406505) DBG |   
	I1007 10:46:00.780820   23621 main.go:141] libmachine: (ha-406505) DBG | </network>
	I1007 10:46:00.780827   23621 main.go:141] libmachine: (ha-406505) DBG | 
	I1007 10:46:00.785975   23621 main.go:141] libmachine: (ha-406505) DBG | trying to create private KVM network mk-ha-406505 192.168.39.0/24...
	I1007 10:46:00.849882   23621 main.go:141] libmachine: (ha-406505) DBG | private KVM network mk-ha-406505 192.168.39.0/24 created
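
For reference, the network XML logged above is then defined and started through libvirt. A minimal sketch of that step, assuming the libvirt.org/go/libvirt bindings (which need cgo and the libvirt development headers); this is illustrative, not the kvm2 driver's actual code:

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

// networkXML mirrors the private DHCP network created above.
const networkXML = `<network>
  <name>mk-ha-406505</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// Define the network persistently, then start it (creates the bridge).
	nw, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatalf("define network: %v", err)
	}
	defer nw.Free()
	if err := nw.Create(); err != nil {
		log.Fatalf("start network: %v", err)
	}
}
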
	I1007 10:46:00.849911   23621 main.go:141] libmachine: (ha-406505) Setting up store path in /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505 ...
	I1007 10:46:00.849973   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:00.849860   23644 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:46:00.850002   23621 main.go:141] libmachine: (ha-406505) Building disk image from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 10:46:00.850027   23621 main.go:141] libmachine: (ha-406505) Downloading /home/jenkins/minikube-integration/19761-3912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 10:46:01.096727   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:01.096588   23644 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa...
	I1007 10:46:01.205683   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:01.205510   23644 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/ha-406505.rawdisk...
	I1007 10:46:01.205717   23621 main.go:141] libmachine: (ha-406505) DBG | Writing magic tar header
	I1007 10:46:01.205736   23621 main.go:141] libmachine: (ha-406505) DBG | Writing SSH key tar header
	I1007 10:46:01.205745   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:01.205639   23644 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505 ...
	I1007 10:46:01.205758   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505
	I1007 10:46:01.205765   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines
	I1007 10:46:01.205774   23621 main.go:141] libmachine: (ha-406505) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505 (perms=drwx------)
	I1007 10:46:01.205782   23621 main.go:141] libmachine: (ha-406505) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines (perms=drwxr-xr-x)
	I1007 10:46:01.205789   23621 main.go:141] libmachine: (ha-406505) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube (perms=drwxr-xr-x)
	I1007 10:46:01.205799   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:46:01.205809   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912
	I1007 10:46:01.205820   23621 main.go:141] libmachine: (ha-406505) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912 (perms=drwxrwxr-x)
	I1007 10:46:01.205825   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 10:46:01.205832   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home/jenkins
	I1007 10:46:01.205838   23621 main.go:141] libmachine: (ha-406505) DBG | Checking permissions on dir: /home
	I1007 10:46:01.205845   23621 main.go:141] libmachine: (ha-406505) DBG | Skipping /home - not owner
	I1007 10:46:01.205854   23621 main.go:141] libmachine: (ha-406505) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 10:46:01.205860   23621 main.go:141] libmachine: (ha-406505) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
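
The permission fix-up above walks from the machine's store path toward the filesystem root, adding the owner-execute bit to each directory the current user owns so qemu/libvirt can traverse into the disk image, and skipping directories owned by someone else (hence "Skipping /home - not owner"). A sketch of that idea; the helper name and exact policy are illustrative:

package main

import (
	"log"
	"os"
	"path/filepath"
	"syscall"
)

// ensureTraversable adds the owner-execute bit on every ancestor directory we
// own, and skips directories owned by other users.
func ensureTraversable(dir string) error {
	for d := dir; d != "/" && d != "."; d = filepath.Dir(d) {
		info, err := os.Stat(d)
		if err != nil {
			return err
		}
		st, ok := info.Sys().(*syscall.Stat_t)
		if !ok || int(st.Uid) != os.Getuid() {
			log.Printf("skipping %s - not owner", d)
			continue
		}
		if info.Mode().Perm()&0o100 == 0 {
			if err := os.Chmod(d, info.Mode()|0o100); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	path := "/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505"
	if err := ensureTraversable(path); err != nil {
		log.Fatal(err)
	}
}
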
	I1007 10:46:01.205868   23621 main.go:141] libmachine: (ha-406505) Creating domain...
	I1007 10:46:01.207028   23621 main.go:141] libmachine: (ha-406505) define libvirt domain using xml: 
	I1007 10:46:01.207069   23621 main.go:141] libmachine: (ha-406505) <domain type='kvm'>
	I1007 10:46:01.207077   23621 main.go:141] libmachine: (ha-406505)   <name>ha-406505</name>
	I1007 10:46:01.207082   23621 main.go:141] libmachine: (ha-406505)   <memory unit='MiB'>2200</memory>
	I1007 10:46:01.207087   23621 main.go:141] libmachine: (ha-406505)   <vcpu>2</vcpu>
	I1007 10:46:01.207093   23621 main.go:141] libmachine: (ha-406505)   <features>
	I1007 10:46:01.207097   23621 main.go:141] libmachine: (ha-406505)     <acpi/>
	I1007 10:46:01.207103   23621 main.go:141] libmachine: (ha-406505)     <apic/>
	I1007 10:46:01.207108   23621 main.go:141] libmachine: (ha-406505)     <pae/>
	I1007 10:46:01.207115   23621 main.go:141] libmachine: (ha-406505)     
	I1007 10:46:01.207120   23621 main.go:141] libmachine: (ha-406505)   </features>
	I1007 10:46:01.207124   23621 main.go:141] libmachine: (ha-406505)   <cpu mode='host-passthrough'>
	I1007 10:46:01.207129   23621 main.go:141] libmachine: (ha-406505)   
	I1007 10:46:01.207133   23621 main.go:141] libmachine: (ha-406505)   </cpu>
	I1007 10:46:01.207137   23621 main.go:141] libmachine: (ha-406505)   <os>
	I1007 10:46:01.207141   23621 main.go:141] libmachine: (ha-406505)     <type>hvm</type>
	I1007 10:46:01.207145   23621 main.go:141] libmachine: (ha-406505)     <boot dev='cdrom'/>
	I1007 10:46:01.207150   23621 main.go:141] libmachine: (ha-406505)     <boot dev='hd'/>
	I1007 10:46:01.207154   23621 main.go:141] libmachine: (ha-406505)     <bootmenu enable='no'/>
	I1007 10:46:01.207161   23621 main.go:141] libmachine: (ha-406505)   </os>
	I1007 10:46:01.207186   23621 main.go:141] libmachine: (ha-406505)   <devices>
	I1007 10:46:01.207206   23621 main.go:141] libmachine: (ha-406505)     <disk type='file' device='cdrom'>
	I1007 10:46:01.207220   23621 main.go:141] libmachine: (ha-406505)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/boot2docker.iso'/>
	I1007 10:46:01.207236   23621 main.go:141] libmachine: (ha-406505)       <target dev='hdc' bus='scsi'/>
	I1007 10:46:01.207250   23621 main.go:141] libmachine: (ha-406505)       <readonly/>
	I1007 10:46:01.207259   23621 main.go:141] libmachine: (ha-406505)     </disk>
	I1007 10:46:01.207281   23621 main.go:141] libmachine: (ha-406505)     <disk type='file' device='disk'>
	I1007 10:46:01.207300   23621 main.go:141] libmachine: (ha-406505)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 10:46:01.207324   23621 main.go:141] libmachine: (ha-406505)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/ha-406505.rawdisk'/>
	I1007 10:46:01.207335   23621 main.go:141] libmachine: (ha-406505)       <target dev='hda' bus='virtio'/>
	I1007 10:46:01.207342   23621 main.go:141] libmachine: (ha-406505)     </disk>
	I1007 10:46:01.207348   23621 main.go:141] libmachine: (ha-406505)     <interface type='network'>
	I1007 10:46:01.207354   23621 main.go:141] libmachine: (ha-406505)       <source network='mk-ha-406505'/>
	I1007 10:46:01.207361   23621 main.go:141] libmachine: (ha-406505)       <model type='virtio'/>
	I1007 10:46:01.207369   23621 main.go:141] libmachine: (ha-406505)     </interface>
	I1007 10:46:01.207381   23621 main.go:141] libmachine: (ha-406505)     <interface type='network'>
	I1007 10:46:01.207395   23621 main.go:141] libmachine: (ha-406505)       <source network='default'/>
	I1007 10:46:01.207406   23621 main.go:141] libmachine: (ha-406505)       <model type='virtio'/>
	I1007 10:46:01.207415   23621 main.go:141] libmachine: (ha-406505)     </interface>
	I1007 10:46:01.207422   23621 main.go:141] libmachine: (ha-406505)     <serial type='pty'>
	I1007 10:46:01.207432   23621 main.go:141] libmachine: (ha-406505)       <target port='0'/>
	I1007 10:46:01.207442   23621 main.go:141] libmachine: (ha-406505)     </serial>
	I1007 10:46:01.207469   23621 main.go:141] libmachine: (ha-406505)     <console type='pty'>
	I1007 10:46:01.207491   23621 main.go:141] libmachine: (ha-406505)       <target type='serial' port='0'/>
	I1007 10:46:01.207513   23621 main.go:141] libmachine: (ha-406505)     </console>
	I1007 10:46:01.207526   23621 main.go:141] libmachine: (ha-406505)     <rng model='virtio'>
	I1007 10:46:01.207539   23621 main.go:141] libmachine: (ha-406505)       <backend model='random'>/dev/random</backend>
	I1007 10:46:01.207548   23621 main.go:141] libmachine: (ha-406505)     </rng>
	I1007 10:46:01.207554   23621 main.go:141] libmachine: (ha-406505)     
	I1007 10:46:01.207563   23621 main.go:141] libmachine: (ha-406505)     
	I1007 10:46:01.207572   23621 main.go:141] libmachine: (ha-406505)   </devices>
	I1007 10:46:01.207587   23621 main.go:141] libmachine: (ha-406505) </domain>
	I1007 10:46:01.207603   23621 main.go:141] libmachine: (ha-406505) 
	I1007 10:46:01.211673   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:76:8f:a7 in network default
	I1007 10:46:01.212309   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:01.212331   23621 main.go:141] libmachine: (ha-406505) Ensuring networks are active...
	I1007 10:46:01.212999   23621 main.go:141] libmachine: (ha-406505) Ensuring network default is active
	I1007 10:46:01.213295   23621 main.go:141] libmachine: (ha-406505) Ensuring network mk-ha-406505 is active
	I1007 10:46:01.213746   23621 main.go:141] libmachine: (ha-406505) Getting domain xml...
	I1007 10:46:01.214325   23621 main.go:141] libmachine: (ha-406505) Creating domain...
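
With the domain XML in hand, the VM is defined persistently and then booted ("Creating domain..."). A hedged sketch using the same libvirt.org/go/libvirt bindings as above; the XML file name here is hypothetical:

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// A domain definition like the one printed above, saved to disk.
	xml, err := os.ReadFile("ha-406505.xml")
	if err != nil {
		log.Fatal(err)
	}

	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // persistent definition
	if err != nil {
		log.Fatalf("define domain: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boots the VM
		log.Fatalf("start domain: %v", err)
	}
}
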
	I1007 10:46:02.421940   23621 main.go:141] libmachine: (ha-406505) Waiting to get IP...
	I1007 10:46:02.422559   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:02.422963   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:02.423013   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:02.422950   23644 retry.go:31] will retry after 195.328474ms: waiting for machine to come up
	I1007 10:46:02.620556   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:02.621120   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:02.621158   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:02.621075   23644 retry.go:31] will retry after 387.449002ms: waiting for machine to come up
	I1007 10:46:03.009575   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:03.010111   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:03.010135   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:03.010073   23644 retry.go:31] will retry after 404.721004ms: waiting for machine to come up
	I1007 10:46:03.416746   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:03.417186   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:03.417213   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:03.417138   23644 retry.go:31] will retry after 372.059443ms: waiting for machine to come up
	I1007 10:46:03.790603   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:03.791114   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:03.791143   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:03.791071   23644 retry.go:31] will retry after 494.767467ms: waiting for machine to come up
	I1007 10:46:04.287716   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:04.288192   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:04.288211   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:04.288147   23644 retry.go:31] will retry after 903.556325ms: waiting for machine to come up
	I1007 10:46:05.193010   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:05.193532   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:05.193599   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:05.193453   23644 retry.go:31] will retry after 1.025768675s: waiting for machine to come up
	I1007 10:46:06.220323   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:06.220836   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:06.220866   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:06.220776   23644 retry.go:31] will retry after 1.100294717s: waiting for machine to come up
	I1007 10:46:07.323044   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:07.323554   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:07.323582   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:07.323505   23644 retry.go:31] will retry after 1.146070621s: waiting for machine to come up
	I1007 10:46:08.470888   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:08.471336   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:08.471361   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:08.471279   23644 retry.go:31] will retry after 2.296444266s: waiting for machine to come up
	I1007 10:46:10.768938   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:10.769285   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:10.769343   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:10.769271   23644 retry.go:31] will retry after 2.239094894s: waiting for machine to come up
	I1007 10:46:13.010328   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:13.010763   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:13.010789   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:13.010721   23644 retry.go:31] will retry after 3.13857084s: waiting for machine to come up
	I1007 10:46:16.150462   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:16.150858   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:16.150885   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:16.150808   23644 retry.go:31] will retry after 3.125257266s: waiting for machine to come up
	I1007 10:46:19.280079   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:19.280531   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find current IP address of domain ha-406505 in network mk-ha-406505
	I1007 10:46:19.280561   23621 main.go:141] libmachine: (ha-406505) DBG | I1007 10:46:19.280474   23644 retry.go:31] will retry after 5.119838312s: waiting for machine to come up
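
The repeated "will retry after ..." lines come from a polling loop that waits for the guest to pick up a DHCP lease, sleeping with growing, jittered delays between attempts. A generic sketch of that pattern; the function names, backoff constants, and lease lookup are illustrative, not the real retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// roughly doubling a jittered delay between attempts.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookup()
		if err == nil && ip != "" {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 5*time.Second {
			backoff *= 2
		}
	}
}

func main() {
	// Stand-in for parsing the DHCP leases of network mk-ha-406505 for the VM's MAC.
	lookup := func() (string, error) { return "", errors.New("no lease yet") }
	if _, err := waitForIP(lookup, 3*time.Second); err != nil {
		fmt.Println(err)
	}
}
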
	I1007 10:46:24.405645   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.406055   23621 main.go:141] libmachine: (ha-406505) Found IP for machine: 192.168.39.250
	I1007 10:46:24.406093   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has current primary IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.406101   23621 main.go:141] libmachine: (ha-406505) Reserving static IP address...
	I1007 10:46:24.406506   23621 main.go:141] libmachine: (ha-406505) DBG | unable to find host DHCP lease matching {name: "ha-406505", mac: "52:54:00:1d:e2:d7", ip: "192.168.39.250"} in network mk-ha-406505
	I1007 10:46:24.482533   23621 main.go:141] libmachine: (ha-406505) DBG | Getting to WaitForSSH function...
	I1007 10:46:24.482567   23621 main.go:141] libmachine: (ha-406505) Reserved static IP address: 192.168.39.250
	I1007 10:46:24.482583   23621 main.go:141] libmachine: (ha-406505) Waiting for SSH to be available...
	I1007 10:46:24.485308   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.485711   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:24.485764   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.485909   23621 main.go:141] libmachine: (ha-406505) DBG | Using SSH client type: external
	I1007 10:46:24.485935   23621 main.go:141] libmachine: (ha-406505) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa (-rw-------)
	I1007 10:46:24.485971   23621 main.go:141] libmachine: (ha-406505) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 10:46:24.485988   23621 main.go:141] libmachine: (ha-406505) DBG | About to run SSH command:
	I1007 10:46:24.486003   23621 main.go:141] libmachine: (ha-406505) DBG | exit 0
	I1007 10:46:24.612334   23621 main.go:141] libmachine: (ha-406505) DBG | SSH cmd err, output: <nil>: 
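
WaitForSSH keeps running `exit 0` on the guest through the external ssh client with the options shown above until the command succeeds. A sketch of that probe, assuming the same user, key path, and options; it is not libmachine's implementation:

package main

import (
	"log"
	"os/exec"
	"time"
)

// sshReady runs `exit 0` on the guest and reports whether it succeeded.
func sshReady(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@"+ip,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa"
	deadline := time.Now().Add(2 * time.Minute)
	for !sshReady("192.168.39.250", key) {
		if time.Now().After(deadline) {
			log.Fatal("gave up waiting for SSH")
		}
		time.Sleep(time.Second)
	}
	log.Println("SSH is available")
}
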
	I1007 10:46:24.612631   23621 main.go:141] libmachine: (ha-406505) KVM machine creation complete!
	I1007 10:46:24.613069   23621 main.go:141] libmachine: (ha-406505) Calling .GetConfigRaw
	I1007 10:46:24.613769   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:24.614010   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:24.614210   23621 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 10:46:24.614233   23621 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:46:24.615544   23621 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 10:46:24.615563   23621 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 10:46:24.615570   23621 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 10:46:24.615577   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:24.617899   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.618287   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:24.618310   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.618494   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:24.618666   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.618809   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.618921   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:24.619056   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:46:24.619306   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:46:24.619320   23621 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 10:46:24.727419   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:46:24.727448   23621 main.go:141] libmachine: Detecting the provisioner...
	I1007 10:46:24.727458   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:24.730240   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.730602   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:24.730629   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.730740   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:24.730937   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.731096   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.731252   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:24.731417   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:46:24.731578   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:46:24.731587   23621 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 10:46:24.845378   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 10:46:24.845478   23621 main.go:141] libmachine: found compatible host: buildroot
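
Detecting the provisioner amounts to reading /etc/os-release on the guest and matching the ID field ("buildroot" here). A small sketch of that parse:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// osReleaseID returns the value of the ID= field from an os-release file.
func osReleaseID(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("ID= not found in %s", path)
}

func main() {
	id, err := osReleaseID("/etc/os-release")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("compatible host:", id) // e.g. "buildroot"
}
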
	I1007 10:46:24.845490   23621 main.go:141] libmachine: Provisioning with buildroot...
	I1007 10:46:24.845498   23621 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:46:24.845780   23621 buildroot.go:166] provisioning hostname "ha-406505"
	I1007 10:46:24.845810   23621 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:46:24.846017   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:24.849059   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.849533   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:24.849565   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.849690   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:24.849892   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.850056   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.850226   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:24.850372   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:46:24.850530   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:46:24.850541   23621 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406505 && echo "ha-406505" | sudo tee /etc/hostname
	I1007 10:46:24.974484   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406505
	
	I1007 10:46:24.974507   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:24.977334   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.977841   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:24.977876   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:24.978053   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:24.978231   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.978390   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:24.978528   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:24.978725   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:46:24.978910   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:46:24.978926   23621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406505' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406505/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406505' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 10:46:25.097736   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:46:25.097768   23621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 10:46:25.097810   23621 buildroot.go:174] setting up certificates
	I1007 10:46:25.097819   23621 provision.go:84] configureAuth start
	I1007 10:46:25.097832   23621 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:46:25.098143   23621 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:46:25.100773   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.101119   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.101156   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.101261   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:25.103487   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.103793   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.103821   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.103966   23621 provision.go:143] copyHostCerts
	I1007 10:46:25.104016   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:46:25.104068   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 10:46:25.104102   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:46:25.104302   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 10:46:25.104436   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:46:25.104469   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 10:46:25.104478   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:46:25.104534   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 10:46:25.104606   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:46:25.104633   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 10:46:25.104641   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:46:25.104691   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 10:46:25.104770   23621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.ha-406505 san=[127.0.0.1 192.168.39.250 ha-406505 localhost minikube]
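
The server certificate is issued from the local CA with the SANs listed above (loopback, the VM IP, and hostname aliases). A self-contained sketch with crypto/x509; the file paths, PKCS#1 key format, validity period, and key size are assumptions, and minikube's own certificate helper differs in detail:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caCert, caKey := loadCA("ca.pem", "ca-key.pem")

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-406505"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.250")},
		DNSNames:    []string{"ha-406505", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}

// loadCA reads a PEM-encoded CA certificate and PKCS#1 RSA private key.
func loadCA(certPath, keyPath string) (*x509.Certificate, *rsa.PrivateKey) {
	certPEM, err := os.ReadFile(certPath)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(certPEM)
	if block == nil {
		log.Fatalf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	keyPEM, err := os.ReadFile(keyPath)
	if err != nil {
		log.Fatal(err)
	}
	block, _ = pem.Decode(keyPEM)
	if block == nil {
		log.Fatalf("no PEM data in %s", keyPath)
	}
	key, err := x509.ParsePKCS1PrivateKey(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	return cert, key
}
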
	I1007 10:46:25.393470   23621 provision.go:177] copyRemoteCerts
	I1007 10:46:25.393548   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 10:46:25.393578   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:25.396327   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.396617   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.396642   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.396839   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:25.397030   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:25.397230   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:25.397379   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:46:25.482559   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 10:46:25.482632   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1007 10:46:25.508425   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 10:46:25.508519   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 10:46:25.534913   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 10:46:25.534986   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 10:46:25.560790   23621 provision.go:87] duration metric: took 462.953383ms to configureAuth
	I1007 10:46:25.560817   23621 buildroot.go:189] setting minikube options for container-runtime
	I1007 10:46:25.560982   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:46:25.561053   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:25.563730   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.564168   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.564201   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.564402   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:25.564589   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:25.564760   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:25.564923   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:25.565085   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:46:25.565253   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:46:25.565272   23621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 10:46:25.800362   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 10:46:25.800389   23621 main.go:141] libmachine: Checking connection to Docker...
	I1007 10:46:25.800397   23621 main.go:141] libmachine: (ha-406505) Calling .GetURL
	I1007 10:46:25.801606   23621 main.go:141] libmachine: (ha-406505) DBG | Using libvirt version 6000000
	I1007 10:46:25.803904   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.804248   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.804273   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.804397   23621 main.go:141] libmachine: Docker is up and running!
	I1007 10:46:25.804414   23621 main.go:141] libmachine: Reticulating splines...
	I1007 10:46:25.804421   23621 client.go:171] duration metric: took 25.026640958s to LocalClient.Create
	I1007 10:46:25.804457   23621 start.go:167] duration metric: took 25.026720726s to libmachine.API.Create "ha-406505"
	I1007 10:46:25.804469   23621 start.go:293] postStartSetup for "ha-406505" (driver="kvm2")
	I1007 10:46:25.804483   23621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 10:46:25.804519   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:25.804801   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 10:46:25.804822   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:25.806847   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.807242   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.807267   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.807402   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:25.807601   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:25.807734   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:25.807837   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:46:25.896212   23621 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 10:46:25.901311   23621 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 10:46:25.901340   23621 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 10:46:25.901403   23621 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 10:46:25.901507   23621 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 10:46:25.901521   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /etc/ssl/certs/110962.pem
	I1007 10:46:25.901647   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 10:46:25.912163   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:46:25.940558   23621 start.go:296] duration metric: took 136.073342ms for postStartSetup
	I1007 10:46:25.940602   23621 main.go:141] libmachine: (ha-406505) Calling .GetConfigRaw
	I1007 10:46:25.941179   23621 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:46:25.943928   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.944270   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.944295   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.944594   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:46:25.944766   23621 start.go:128] duration metric: took 25.185278256s to createHost
	I1007 10:46:25.944788   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:25.946920   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.947236   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:25.947263   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:25.947390   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:25.947554   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:25.947698   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:25.947796   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:25.947917   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:46:25.948107   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:46:25.948122   23621 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 10:46:26.057285   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728297986.034090654
	
	I1007 10:46:26.057320   23621 fix.go:216] guest clock: 1728297986.034090654
	I1007 10:46:26.057332   23621 fix.go:229] Guest: 2024-10-07 10:46:26.034090654 +0000 UTC Remote: 2024-10-07 10:46:25.944777719 +0000 UTC m=+25.297917279 (delta=89.312935ms)
	I1007 10:46:26.057360   23621 fix.go:200] guest clock delta is within tolerance: 89.312935ms
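
The guest clock check parses the `date +%s.%N` output, compares it with the host timestamp captured just before, and accepts the machine when the drift is small (about 89ms here). A sketch using the values from this run; the 2s tolerance is a hypothetical threshold, and float parsing loses a little sub-microsecond precision:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta converts the guest's `date +%s.%N` output to a time and returns
// how far it drifts from the reference host time.
func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, err
	}
	sec, frac := math.Modf(secs)
	guest := time.Unix(int64(sec), int64(frac*1e9))
	return guest.Sub(host), nil
}

func main() {
	host := time.Unix(1728297985, 944777719) // host time just before the SSH call
	delta, err := clockDelta("1728297986.034090654\n", host)
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta.Abs() < tolerance)
}
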
	I1007 10:46:26.057368   23621 start.go:83] releasing machines lock for "ha-406505", held for 25.297953369s
	I1007 10:46:26.057394   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:26.057664   23621 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:46:26.060710   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:26.061183   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:26.061235   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:26.061454   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:26.061984   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:26.062147   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:26.062276   23621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 10:46:26.062317   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:26.062353   23621 ssh_runner.go:195] Run: cat /version.json
	I1007 10:46:26.062375   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:26.065089   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:26.065433   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:26.065561   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:26.065589   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:26.065720   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:26.065828   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:26.065853   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:26.065883   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:26.065971   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:26.066095   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:26.066095   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:26.066234   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:26.066283   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:46:26.066351   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:46:26.174687   23621 ssh_runner.go:195] Run: systemctl --version
	I1007 10:46:26.181055   23621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 10:46:26.339685   23621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 10:46:26.346234   23621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 10:46:26.346285   23621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 10:46:26.362376   23621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 10:46:26.362399   23621 start.go:495] detecting cgroup driver to use...
	I1007 10:46:26.362452   23621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 10:46:26.378080   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 10:46:26.392505   23621 docker.go:217] disabling cri-docker service (if available) ...
	I1007 10:46:26.392560   23621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 10:46:26.406784   23621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 10:46:26.422960   23621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 10:46:26.552971   23621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 10:46:26.690240   23621 docker.go:233] disabling docker service ...
	I1007 10:46:26.690309   23621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 10:46:26.706428   23621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 10:46:26.721025   23621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 10:46:26.853079   23621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 10:46:26.978324   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 10:46:26.994454   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 10:46:27.014137   23621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 10:46:27.014198   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.025749   23621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 10:46:27.025816   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.037748   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.049263   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.062174   23621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 10:46:27.074940   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.086608   23621 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:46:27.104859   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
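
Each of the sed invocations above rewrites one setting in /etc/crio/crio.conf.d/02-crio.conf over SSH. The pause_image edit, for instance, could equally be done with a multiline regexp; a sketch, not how minikube actually applies it:

package main

import (
	"log"
	"os"
	"regexp"
)

// setPauseImage rewrites the pause_image line in a CRI-O drop-in config,
// equivalent to: sed -i 's|^.*pause_image = .*$|pause_image = "<image>"|'
func setPauseImage(path, image string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "`+image+`"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10"); err != nil {
		log.Fatal(err)
	}
}
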
	I1007 10:46:27.116719   23621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 10:46:27.127669   23621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 10:46:27.127745   23621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 10:46:27.142518   23621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 10:46:27.153045   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:46:27.275924   23621 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 10:46:27.373391   23621 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 10:46:27.373475   23621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 10:46:27.378225   23621 start.go:563] Will wait 60s for crictl version
	I1007 10:46:27.378286   23621 ssh_runner.go:195] Run: which crictl
	I1007 10:46:27.382179   23621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 10:46:27.423267   23621 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
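
The "Will wait 60s" steps above amount to polling for the CRI socket to appear after the crio restart and then asking crictl for the runtime version. A sketch of that wait; the poll interval and error handling are illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForSocket polls until the socket path exists or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(string(out))
}
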
	I1007 10:46:27.423395   23621 ssh_runner.go:195] Run: crio --version
	I1007 10:46:27.453236   23621 ssh_runner.go:195] Run: crio --version
	I1007 10:46:27.483657   23621 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 10:46:27.484938   23621 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:46:27.487606   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:27.487998   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:27.488028   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:27.488343   23621 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 10:46:27.492528   23621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:46:27.506306   23621 kubeadm.go:883] updating cluster {Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 10:46:27.506405   23621 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:46:27.506452   23621 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:46:27.539872   23621 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 10:46:27.539951   23621 ssh_runner.go:195] Run: which lz4
	I1007 10:46:27.544145   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1007 10:46:27.544248   23621 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 10:46:27.549024   23621 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 10:46:27.549064   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 10:46:28.958319   23621 crio.go:462] duration metric: took 1.414106826s to copy over tarball
	I1007 10:46:28.958395   23621 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 10:46:30.997682   23621 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.039251996s)
	I1007 10:46:30.997713   23621 crio.go:469] duration metric: took 2.039368509s to extract the tarball
	I1007 10:46:30.997720   23621 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 10:46:31.039009   23621 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:46:31.088841   23621 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 10:46:31.088866   23621 cache_images.go:84] Images are preloaded, skipping loading
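Because the guest initially reported no preloaded images (crio.go:510 above), minikube scp's the ~388 MB preload tarball and unpacks it into /var with lz4 decompression before re-running crictl images. A rough Go equivalent of just the extraction step, assuming tar and lz4 are on PATH as they are inside the minikube ISO:

package main

import (
	"log"
	"os/exec"
)

// extractPreload mirrors the command from the log:
//   sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
// Paths are the ones shown above; drop "sudo" when testing in a throwaway directory.
func extractPreload(tarball, dest string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Printf("tar output: %s", out)
	}
	return err
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		log.Fatal(err)
	}
}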
	I1007 10:46:31.088873   23621 kubeadm.go:934] updating node { 192.168.39.250 8443 v1.31.1 crio true true} ...
	I1007 10:46:31.089007   23621 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406505 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 10:46:31.089099   23621 ssh_runner.go:195] Run: crio config
	I1007 10:46:31.133611   23621 cni.go:84] Creating CNI manager for ""
	I1007 10:46:31.133634   23621 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 10:46:31.133642   23621 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 10:46:31.133662   23621 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.250 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-406505 NodeName:ha-406505 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 10:46:31.133799   23621 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-406505"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
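The kubeadm config generated above bundles four YAML documents in one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A hedged sanity check in Go that lists the apiVersion/kind of each document, using gopkg.in/yaml.v3 (an assumption; minikube itself just templates the file as text and scp's it to /var/tmp/minikube/kubeadm.yaml.new):

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// listKinds decodes every YAML document in the file and prints its
// apiVersion/kind pair, which should match the four sections shown in the log.
func listKinds(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			return nil
		} else if err != nil {
			return err
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}

func main() {
	if err := listKinds("/var/tmp/minikube/kubeadm.yaml"); err != nil {
		log.Fatal(err)
	}
}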
	
	I1007 10:46:31.133825   23621 kube-vip.go:115] generating kube-vip config ...
	I1007 10:46:31.133864   23621 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 10:46:31.150299   23621 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 10:46:31.150386   23621 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
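The static pod above runs kube-vip with ARP advertisement and leader election, so the APIServerHAVIP 192.168.39.254 always fronts whichever control-plane node holds the plndr-cp-lock lease, and lb_enable additionally load-balances API traffic on port 8443. A hypothetical smoke test in Go that checks the VIP answers /healthz; certificate verification is skipped only because this throwaway probe does not load minikubeCA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// VIP and port taken from the kube-vip config above.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is signed by minikubeCA, which this probe
			// does not load, so skip verification for the quick check.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s -> %s\n", resp.Status, body)
}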
	I1007 10:46:31.150432   23621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 10:46:31.160704   23621 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 10:46:31.160771   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1007 10:46:31.170635   23621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1007 10:46:31.188233   23621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 10:46:31.205276   23621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1007 10:46:31.222191   23621 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1007 10:46:31.240224   23621 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 10:46:31.244214   23621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:46:31.257345   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:46:31.397967   23621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:46:31.417027   23621 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505 for IP: 192.168.39.250
	I1007 10:46:31.417077   23621 certs.go:194] generating shared ca certs ...
	I1007 10:46:31.417100   23621 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.417284   23621 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 10:46:31.417383   23621 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 10:46:31.417398   23621 certs.go:256] generating profile certs ...
	I1007 10:46:31.417447   23621 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key
	I1007 10:46:31.417461   23621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.crt with IP's: []
	I1007 10:46:31.468016   23621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.crt ...
	I1007 10:46:31.468047   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.crt: {Name:mk762d603dc2fbb5c1297f6a7a3cc345fce24083 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.468271   23621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key ...
	I1007 10:46:31.468286   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key: {Name:mk7067411a96e86ff81d9c76638d9b65fd88775f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.468374   23621 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.139948ad
	I1007 10:46:31.468389   23621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.139948ad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.250 192.168.39.254]
	I1007 10:46:31.560197   23621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.139948ad ...
	I1007 10:46:31.560235   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.139948ad: {Name:mk03ccdd590c02d4a8e3fdabb8ce2b00441c3bcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.560434   23621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.139948ad ...
	I1007 10:46:31.560450   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.139948ad: {Name:mk9acbd48737ac1a11351bcc3c9e01a19e35889d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.560533   23621 certs.go:381] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.139948ad -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt
	I1007 10:46:31.560605   23621 certs.go:385] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.139948ad -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key
	I1007 10:46:31.560660   23621 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key
	I1007 10:46:31.560674   23621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt with IP's: []
	I1007 10:46:31.824715   23621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt ...
	I1007 10:46:31.824745   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt: {Name:mk2f87794c4b3ce39df0df4382fd33d9633bb32b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.824924   23621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key ...
	I1007 10:46:31.824937   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key: {Name:mka71f56202903b2b66df7c3367c064cbfe379ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:31.825016   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 10:46:31.825037   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 10:46:31.825053   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 10:46:31.825068   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 10:46:31.825083   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 10:46:31.825098   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 10:46:31.825112   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 10:46:31.825130   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 10:46:31.825188   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 10:46:31.825225   23621 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 10:46:31.825236   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 10:46:31.825267   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 10:46:31.825296   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 10:46:31.825321   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 10:46:31.825363   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:46:31.825391   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:46:31.825407   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem -> /usr/share/ca-certificates/11096.pem
	I1007 10:46:31.825421   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /usr/share/ca-certificates/110962.pem
	I1007 10:46:31.825934   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 10:46:31.854979   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 10:46:31.881623   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 10:46:31.908276   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 10:46:31.933657   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1007 10:46:31.959947   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 10:46:31.985851   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 10:46:32.010600   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 10:46:32.035549   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 10:46:32.060173   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 10:46:32.084842   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 10:46:32.110513   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 10:46:32.129118   23621 ssh_runner.go:195] Run: openssl version
	I1007 10:46:32.134991   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 10:46:32.146083   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 10:46:32.150750   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 10:46:32.150813   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 10:46:32.156917   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
	I1007 10:46:32.167842   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 10:46:32.179302   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 10:46:32.184104   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 10:46:32.184166   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 10:46:32.189957   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 10:46:32.203820   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 10:46:32.218928   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:46:32.223877   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:46:32.223932   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:46:32.234358   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
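The three ln -fs commands above create OpenSSL-style hash links (51391683.0, 3ec20f2e.0, b5213941.0) under /etc/ssl/certs, which is how TLS libraries look up a trusted CA by subject hash. A sketch of the same step in Go, shelling out to openssl x509 -hash -noout exactly as the log does (paths are the ones shown above; it needs privileges to write /etc/ssl/certs):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of certPath and creates
// <certsDir>/<hash>.0 pointing at it, mirroring the ln -fs commands above.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("symlink created")
}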
	I1007 10:46:32.254776   23621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 10:46:32.262324   23621 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 10:46:32.262372   23621 kubeadm.go:392] StartCluster: {Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clus
terName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:46:32.262436   23621 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 10:46:32.262503   23621 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 10:46:32.310104   23621 cri.go:89] found id: ""
	I1007 10:46:32.310161   23621 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 10:46:32.319996   23621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 10:46:32.329800   23621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 10:46:32.339655   23621 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 10:46:32.339683   23621 kubeadm.go:157] found existing configuration files:
	
	I1007 10:46:32.339722   23621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 10:46:32.348661   23621 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 10:46:32.348719   23621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 10:46:32.358855   23621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 10:46:32.368082   23621 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 10:46:32.368138   23621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 10:46:32.378072   23621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 10:46:32.387338   23621 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 10:46:32.387394   23621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 10:46:32.397186   23621 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 10:46:32.406684   23621 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 10:46:32.406738   23621 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 10:46:32.417090   23621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 10:46:32.545879   23621 kubeadm.go:310] W1007 10:46:32.529591     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 10:46:32.546834   23621 kubeadm.go:310] W1007 10:46:32.530709     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 10:46:32.656304   23621 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 10:46:43.090298   23621 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 10:46:43.090373   23621 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 10:46:43.090492   23621 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 10:46:43.090653   23621 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 10:46:43.090862   23621 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 10:46:43.090964   23621 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 10:46:43.092688   23621 out.go:235]   - Generating certificates and keys ...
	I1007 10:46:43.092759   23621 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 10:46:43.092833   23621 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 10:46:43.092901   23621 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 10:46:43.092950   23621 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 10:46:43.092999   23621 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 10:46:43.093054   23621 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 10:46:43.093106   23621 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 10:46:43.093205   23621 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-406505 localhost] and IPs [192.168.39.250 127.0.0.1 ::1]
	I1007 10:46:43.093261   23621 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 10:46:43.093417   23621 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-406505 localhost] and IPs [192.168.39.250 127.0.0.1 ::1]
	I1007 10:46:43.093514   23621 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 10:46:43.093567   23621 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 10:46:43.093623   23621 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 10:46:43.093706   23621 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 10:46:43.093782   23621 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 10:46:43.093856   23621 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 10:46:43.093933   23621 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 10:46:43.094023   23621 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 10:46:43.094096   23621 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 10:46:43.094210   23621 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 10:46:43.094282   23621 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 10:46:43.095798   23621 out.go:235]   - Booting up control plane ...
	I1007 10:46:43.095884   23621 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 10:46:43.095971   23621 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 10:46:43.096065   23621 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 10:46:43.096171   23621 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 10:46:43.096294   23621 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 10:46:43.096350   23621 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 10:46:43.096510   23621 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 10:46:43.096664   23621 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 10:46:43.096745   23621 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.992623ms
	I1007 10:46:43.096840   23621 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 10:46:43.096957   23621 kubeadm.go:310] [api-check] The API server is healthy after 6.063891261s
	I1007 10:46:43.097083   23621 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 10:46:43.097207   23621 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 10:46:43.097264   23621 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 10:46:43.097410   23621 kubeadm.go:310] [mark-control-plane] Marking the node ha-406505 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 10:46:43.097470   23621 kubeadm.go:310] [bootstrap-token] Using token: wypuxz.8mosh3hhf4vr1jtg
	I1007 10:46:43.098950   23621 out.go:235]   - Configuring RBAC rules ...
	I1007 10:46:43.099071   23621 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 10:46:43.099163   23621 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 10:46:43.099343   23621 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 10:46:43.099509   23621 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 10:46:43.099662   23621 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 10:46:43.099752   23621 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 10:46:43.099910   23621 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 10:46:43.099999   23621 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 10:46:43.100092   23621 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 10:46:43.100101   23621 kubeadm.go:310] 
	I1007 10:46:43.100184   23621 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 10:46:43.100194   23621 kubeadm.go:310] 
	I1007 10:46:43.100298   23621 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 10:46:43.100307   23621 kubeadm.go:310] 
	I1007 10:46:43.100344   23621 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 10:46:43.100433   23621 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 10:46:43.100524   23621 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 10:46:43.100533   23621 kubeadm.go:310] 
	I1007 10:46:43.100614   23621 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 10:46:43.100626   23621 kubeadm.go:310] 
	I1007 10:46:43.100698   23621 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 10:46:43.100713   23621 kubeadm.go:310] 
	I1007 10:46:43.100756   23621 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 10:46:43.100822   23621 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 10:46:43.100914   23621 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 10:46:43.100930   23621 kubeadm.go:310] 
	I1007 10:46:43.101035   23621 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 10:46:43.101136   23621 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 10:46:43.101145   23621 kubeadm.go:310] 
	I1007 10:46:43.101255   23621 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wypuxz.8mosh3hhf4vr1jtg \
	I1007 10:46:43.101367   23621 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df \
	I1007 10:46:43.101400   23621 kubeadm.go:310] 	--control-plane 
	I1007 10:46:43.101407   23621 kubeadm.go:310] 
	I1007 10:46:43.101475   23621 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 10:46:43.101485   23621 kubeadm.go:310] 
	I1007 10:46:43.101546   23621 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wypuxz.8mosh3hhf4vr1jtg \
	I1007 10:46:43.101655   23621 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df 
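The join commands printed by kubeadm pin the cluster CA via --discovery-token-ca-cert-hash, a SHA-256 over the CA certificate's DER-encoded Subject Public Key Info; joining nodes use it to verify the CA they discover through the bootstrap token. The hash can be recomputed from /var/lib/minikube/certs/ca.crt with only the Go standard library, as in this sketch:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Path as used throughout the log for the minikubeCA certificate.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}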
	I1007 10:46:43.101680   23621 cni.go:84] Creating CNI manager for ""
	I1007 10:46:43.101688   23621 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 10:46:43.103490   23621 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1007 10:46:43.104857   23621 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1007 10:46:43.110599   23621 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1007 10:46:43.110619   23621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1007 10:46:43.132034   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1007 10:46:43.562211   23621 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 10:46:43.562270   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:43.562324   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-406505 minikube.k8s.io/updated_at=2024_10_07T10_46_43_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f minikube.k8s.io/name=ha-406505 minikube.k8s.io/primary=true
	I1007 10:46:43.616727   23621 ops.go:34] apiserver oom_adj: -16
	I1007 10:46:43.782316   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:44.282755   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:44.782532   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:45.283204   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:45.783063   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:46.283266   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:46.783411   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:46:46.943992   23621 kubeadm.go:1113] duration metric: took 3.381769921s to wait for elevateKubeSystemPrivileges
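The repeated "kubectl get sa default" runs between 10:46:43.78 and 10:46:46.94 are minikube polling, roughly every 500 ms, until the default ServiceAccount exists so the minikube-rbac cluster-admin binding created above can take effect. A simplified sketch of that polling loop, assuming a kubectl binary on PATH (minikube actually invokes the versioned binary under /var/lib/minikube/binaries over SSH):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls "kubectl get sa default" every 500ms, matching the
// cadence visible in the log, until it succeeds or the deadline passes.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
		fmt.Println(err)
	}
}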
	I1007 10:46:46.944035   23621 kubeadm.go:394] duration metric: took 14.681663569s to StartCluster
	I1007 10:46:46.944056   23621 settings.go:142] acquiring lock: {Name:mk699f217216dbe513edf6a42c79fe85f8c20124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:46.944147   23621 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:46:46.945102   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/kubeconfig: {Name:mkc8a5ce1dbafe55e056433fff5c065506f83346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:46:46.945388   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 10:46:46.945386   23621 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:46:46.945413   23621 start.go:241] waiting for startup goroutines ...
	I1007 10:46:46.945429   23621 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 10:46:46.945523   23621 addons.go:69] Setting storage-provisioner=true in profile "ha-406505"
	I1007 10:46:46.945543   23621 addons.go:234] Setting addon storage-provisioner=true in "ha-406505"
	I1007 10:46:46.945553   23621 addons.go:69] Setting default-storageclass=true in profile "ha-406505"
	I1007 10:46:46.945572   23621 host.go:66] Checking if "ha-406505" exists ...
	I1007 10:46:46.945583   23621 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-406505"
	I1007 10:46:46.945607   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:46:46.946008   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:46.946009   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:46.946088   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:46.946051   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:46.961784   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45751
	I1007 10:46:46.961861   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42021
	I1007 10:46:46.962343   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:46.962400   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:46.962845   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:46.962858   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:46.962977   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:46.962998   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:46.963231   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:46.963434   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:46.963629   23621 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:46:46.963828   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:46.963879   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:46.966424   23621 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:46:46.966748   23621 kapi.go:59] client config for ha-406505: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.crt", KeyFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key", CAFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 10:46:46.967326   23621 cert_rotation.go:140] Starting client certificate rotation controller
	I1007 10:46:46.967544   23621 addons.go:234] Setting addon default-storageclass=true in "ha-406505"
	I1007 10:46:46.967595   23621 host.go:66] Checking if "ha-406505" exists ...
	I1007 10:46:46.967974   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:46.968044   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:46.980041   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40697
	I1007 10:46:46.980679   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:46.981275   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:46.981307   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:46.981679   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:46.981861   23621 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:46:46.982917   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43249
	I1007 10:46:46.983418   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:46.983677   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:46.983888   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:46.983902   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:46.984223   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:46.984726   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:46.984780   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:46.985635   23621 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 10:46:46.986794   23621 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 10:46:46.986811   23621 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 10:46:46.986827   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:46.990137   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:46.990593   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:46.990630   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:46.990792   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:46.990980   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:46.991153   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:46.991295   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:46:47.000938   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34485
	I1007 10:46:47.001317   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:47.001822   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:47.001835   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:47.002157   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:47.002359   23621 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:46:47.004192   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:46:47.004381   23621 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 10:46:47.004396   23621 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 10:46:47.004415   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:46:47.007286   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:47.007709   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:46:47.007733   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:46:47.007859   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:46:47.008018   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:46:47.008149   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:46:47.008248   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:46:47.195335   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 10:46:47.217916   23621 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 10:46:47.332630   23621 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 10:46:47.810865   23621 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
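The sed pipeline at 10:46:47.195 rewrites the coredns ConfigMap so the Corefile gains a hosts stanza mapping host.minikube.internal to 192.168.39.1 (with fallthrough) just ahead of the forward plugin, then replaces the ConfigMap; the "host record injected" line above confirms it took effect. A simplified Go sketch of that string injection, with a made-up minimal Corefile standing in for the real one:

package main

import (
	"fmt"
	"strings"
)

// injectHostsBlock inserts a hosts{} stanza immediately before the
// "forward . /etc/resolv.conf" line, the same spot the sed expression targets.
func injectHostsBlock(corefile, ip, host string) string {
	stanza := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }\n", ip, host)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(stanza)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf\n    cache 30\n}\n"
	fmt.Print(injectHostsBlock(corefile, "192.168.39.1", "host.minikube.internal"))
}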
	I1007 10:46:48.064696   23621 main.go:141] libmachine: Making call to close driver server
	I1007 10:46:48.064705   23621 main.go:141] libmachine: Making call to close driver server
	I1007 10:46:48.064720   23621 main.go:141] libmachine: (ha-406505) Calling .Close
	I1007 10:46:48.064727   23621 main.go:141] libmachine: (ha-406505) Calling .Close
	I1007 10:46:48.064985   23621 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:46:48.065031   23621 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:46:48.065048   23621 main.go:141] libmachine: Making call to close driver server
	I1007 10:46:48.065053   23621 main.go:141] libmachine: (ha-406505) DBG | Closing plugin on server side
	I1007 10:46:48.065058   23621 main.go:141] libmachine: (ha-406505) Calling .Close
	I1007 10:46:48.064988   23621 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:46:48.065100   23621 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:46:48.065116   23621 main.go:141] libmachine: Making call to close driver server
	I1007 10:46:48.065125   23621 main.go:141] libmachine: (ha-406505) Calling .Close
	I1007 10:46:48.065104   23621 main.go:141] libmachine: (ha-406505) DBG | Closing plugin on server side
	I1007 10:46:48.065227   23621 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:46:48.065239   23621 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:46:48.066429   23621 main.go:141] libmachine: (ha-406505) DBG | Closing plugin on server side
	I1007 10:46:48.066481   23621 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:46:48.066520   23621 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:46:48.066607   23621 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 10:46:48.066629   23621 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 10:46:48.066712   23621 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1007 10:46:48.066721   23621 round_trippers.go:469] Request Headers:
	I1007 10:46:48.066729   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:46:48.066749   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:46:48.079736   23621 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1007 10:46:48.080394   23621 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1007 10:46:48.080409   23621 round_trippers.go:469] Request Headers:
	I1007 10:46:48.080417   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:46:48.080421   23621 round_trippers.go:473]     Content-Type: application/json
	I1007 10:46:48.080424   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:46:48.082744   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:46:48.082873   23621 main.go:141] libmachine: Making call to close driver server
	I1007 10:46:48.082885   23621 main.go:141] libmachine: (ha-406505) Calling .Close
	I1007 10:46:48.083166   23621 main.go:141] libmachine: (ha-406505) DBG | Closing plugin on server side
	I1007 10:46:48.083174   23621 main.go:141] libmachine: Successfully made call to close driver server
	I1007 10:46:48.083188   23621 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 10:46:48.084834   23621 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1007 10:46:48.085997   23621 addons.go:510] duration metric: took 1.140572645s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1007 10:46:48.086031   23621 start.go:246] waiting for cluster config update ...
	I1007 10:46:48.086044   23621 start.go:255] writing updated cluster config ...
	I1007 10:46:48.087964   23621 out.go:201] 
	I1007 10:46:48.089528   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:46:48.089609   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:46:48.091151   23621 out.go:177] * Starting "ha-406505-m02" control-plane node in "ha-406505" cluster
	I1007 10:46:48.092447   23621 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:46:48.092473   23621 cache.go:56] Caching tarball of preloaded images
	I1007 10:46:48.092563   23621 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 10:46:48.092574   23621 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 10:46:48.092637   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:46:48.092794   23621 start.go:360] acquireMachinesLock for ha-406505-m02: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 10:46:48.092831   23621 start.go:364] duration metric: took 21.347µs to acquireMachinesLock for "ha-406505-m02"
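The acquireMachinesLock lines above take a per-profile lock (with the 500ms retry delay and 13m timeout listed in the log) so only one caller provisions machines for this profile at a time. A toy, in-process sketch of that acquire-with-timeout pattern follows; minikube's actual lock is cross-process (file-based), so treat this purely as an illustration of the polling shape.

package main

import (
	"fmt"
	"sync"
	"time"
)

// machineLocks is a toy, in-process stand-in for minikube's cross-process lock.
var machineLocks sync.Map // name -> chan struct{} used as a one-slot semaphore

// acquireMachinesLock polls every delay until the named lock is free or the timeout passes.
func acquireMachinesLock(name string, delay, timeout time.Duration) (release func(), err error) {
	ch, _ := machineLocks.LoadOrStore(name, make(chan struct{}, 1))
	sem := ch.(chan struct{})
	deadline := time.Now().Add(timeout)
	for {
		select {
		case sem <- struct{}{}: // acquired
			return func() { <-sem }, nil
		default:
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring lock %q", name)
			}
			time.Sleep(delay)
		}
	}
}

func main() {
	start := time.Now()
	release, err := acquireMachinesLock("ha-406505-m02", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Printf("duration metric: took %v to acquireMachinesLock\n", time.Since(start))
}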
	I1007 10:46:48.092855   23621 start.go:93] Provisioning new machine with config: &{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:46:48.092915   23621 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1007 10:46:48.094418   23621 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 10:46:48.094505   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:46:48.094537   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:46:48.110315   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34135
	I1007 10:46:48.110866   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:46:48.111379   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:46:48.111403   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:46:48.111770   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:46:48.111953   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetMachineName
	I1007 10:46:48.112082   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:46:48.112219   23621 start.go:159] libmachine.API.Create for "ha-406505" (driver="kvm2")
	I1007 10:46:48.112248   23621 client.go:168] LocalClient.Create starting
	I1007 10:46:48.112287   23621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem
	I1007 10:46:48.112335   23621 main.go:141] libmachine: Decoding PEM data...
	I1007 10:46:48.112356   23621 main.go:141] libmachine: Parsing certificate...
	I1007 10:46:48.112422   23621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem
	I1007 10:46:48.112452   23621 main.go:141] libmachine: Decoding PEM data...
	I1007 10:46:48.112468   23621 main.go:141] libmachine: Parsing certificate...
	I1007 10:46:48.112494   23621 main.go:141] libmachine: Running pre-create checks...
	I1007 10:46:48.112506   23621 main.go:141] libmachine: (ha-406505-m02) Calling .PreCreateCheck
	I1007 10:46:48.112657   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetConfigRaw
	I1007 10:46:48.113018   23621 main.go:141] libmachine: Creating machine...
	I1007 10:46:48.113031   23621 main.go:141] libmachine: (ha-406505-m02) Calling .Create
	I1007 10:46:48.113183   23621 main.go:141] libmachine: (ha-406505-m02) Creating KVM machine...
	I1007 10:46:48.114398   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found existing default KVM network
	I1007 10:46:48.114519   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found existing private KVM network mk-ha-406505
	I1007 10:46:48.114657   23621 main.go:141] libmachine: (ha-406505-m02) Setting up store path in /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02 ...
	I1007 10:46:48.114682   23621 main.go:141] libmachine: (ha-406505-m02) Building disk image from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 10:46:48.114793   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:48.114651   23988 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:46:48.114857   23621 main.go:141] libmachine: (ha-406505-m02) Downloading /home/jenkins/minikube-integration/19761-3912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 10:46:48.352057   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:48.351887   23988 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa...
	I1007 10:46:48.484305   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:48.484165   23988 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/ha-406505-m02.rawdisk...
	I1007 10:46:48.484357   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Writing magic tar header
	I1007 10:46:48.484379   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Writing SSH key tar header
	I1007 10:46:48.484391   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:48.484280   23988 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02 ...
	I1007 10:46:48.484403   23621 main.go:141] libmachine: (ha-406505-m02) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02 (perms=drwx------)
	I1007 10:46:48.484420   23621 main.go:141] libmachine: (ha-406505-m02) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines (perms=drwxr-xr-x)
	I1007 10:46:48.484433   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02
	I1007 10:46:48.484444   23621 main.go:141] libmachine: (ha-406505-m02) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube (perms=drwxr-xr-x)
	I1007 10:46:48.484459   23621 main.go:141] libmachine: (ha-406505-m02) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912 (perms=drwxrwxr-x)
	I1007 10:46:48.484478   23621 main.go:141] libmachine: (ha-406505-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 10:46:48.484491   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines
	I1007 10:46:48.484510   23621 main.go:141] libmachine: (ha-406505-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 10:46:48.484523   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:46:48.484535   23621 main.go:141] libmachine: (ha-406505-m02) Creating domain...
	I1007 10:46:48.484554   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912
	I1007 10:46:48.484571   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 10:46:48.484583   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home/jenkins
	I1007 10:46:48.484602   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Checking permissions on dir: /home
	I1007 10:46:48.484618   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Skipping /home - not owner
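As a rough illustration of the "Creating raw disk image" step a few lines above: the KVM driver lays down a .rawdisk file of the requested size (20000MB here) and seeds it with a small tar stream carrying the SSH key, which the guest unpacks on first boot. The sketch below covers only the sparse-file part and createSparseDisk is an invented helper, not the driver's actual function.

package main

import (
	"fmt"
	"os"
)

// createSparseDisk creates an empty raw disk image of sizeMB megabytes.
// Truncate on a fresh file produces a sparse file on most filesystems,
// so the full size is not written to disk up front.
func createSparseDisk(path string, sizeMB int64) error {
	f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
	if err != nil {
		return err
	}
	defer f.Close()
	return f.Truncate(sizeMB * 1024 * 1024)
}

func main() {
	if err := createSparseDisk("ha-406505-m02.rawdisk", 20000); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}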
	I1007 10:46:48.485445   23621 main.go:141] libmachine: (ha-406505-m02) define libvirt domain using xml: 
	I1007 10:46:48.485473   23621 main.go:141] libmachine: (ha-406505-m02) <domain type='kvm'>
	I1007 10:46:48.485489   23621 main.go:141] libmachine: (ha-406505-m02)   <name>ha-406505-m02</name>
	I1007 10:46:48.485497   23621 main.go:141] libmachine: (ha-406505-m02)   <memory unit='MiB'>2200</memory>
	I1007 10:46:48.485528   23621 main.go:141] libmachine: (ha-406505-m02)   <vcpu>2</vcpu>
	I1007 10:46:48.485552   23621 main.go:141] libmachine: (ha-406505-m02)   <features>
	I1007 10:46:48.485563   23621 main.go:141] libmachine: (ha-406505-m02)     <acpi/>
	I1007 10:46:48.485574   23621 main.go:141] libmachine: (ha-406505-m02)     <apic/>
	I1007 10:46:48.485584   23621 main.go:141] libmachine: (ha-406505-m02)     <pae/>
	I1007 10:46:48.485596   23621 main.go:141] libmachine: (ha-406505-m02)     
	I1007 10:46:48.485608   23621 main.go:141] libmachine: (ha-406505-m02)   </features>
	I1007 10:46:48.485625   23621 main.go:141] libmachine: (ha-406505-m02)   <cpu mode='host-passthrough'>
	I1007 10:46:48.485637   23621 main.go:141] libmachine: (ha-406505-m02)   
	I1007 10:46:48.485645   23621 main.go:141] libmachine: (ha-406505-m02)   </cpu>
	I1007 10:46:48.485659   23621 main.go:141] libmachine: (ha-406505-m02)   <os>
	I1007 10:46:48.485671   23621 main.go:141] libmachine: (ha-406505-m02)     <type>hvm</type>
	I1007 10:46:48.485684   23621 main.go:141] libmachine: (ha-406505-m02)     <boot dev='cdrom'/>
	I1007 10:46:48.485699   23621 main.go:141] libmachine: (ha-406505-m02)     <boot dev='hd'/>
	I1007 10:46:48.485712   23621 main.go:141] libmachine: (ha-406505-m02)     <bootmenu enable='no'/>
	I1007 10:46:48.485721   23621 main.go:141] libmachine: (ha-406505-m02)   </os>
	I1007 10:46:48.485801   23621 main.go:141] libmachine: (ha-406505-m02)   <devices>
	I1007 10:46:48.485824   23621 main.go:141] libmachine: (ha-406505-m02)     <disk type='file' device='cdrom'>
	I1007 10:46:48.485840   23621 main.go:141] libmachine: (ha-406505-m02)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/boot2docker.iso'/>
	I1007 10:46:48.485854   23621 main.go:141] libmachine: (ha-406505-m02)       <target dev='hdc' bus='scsi'/>
	I1007 10:46:48.485865   23621 main.go:141] libmachine: (ha-406505-m02)       <readonly/>
	I1007 10:46:48.485875   23621 main.go:141] libmachine: (ha-406505-m02)     </disk>
	I1007 10:46:48.485902   23621 main.go:141] libmachine: (ha-406505-m02)     <disk type='file' device='disk'>
	I1007 10:46:48.485924   23621 main.go:141] libmachine: (ha-406505-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 10:46:48.485938   23621 main.go:141] libmachine: (ha-406505-m02)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/ha-406505-m02.rawdisk'/>
	I1007 10:46:48.485950   23621 main.go:141] libmachine: (ha-406505-m02)       <target dev='hda' bus='virtio'/>
	I1007 10:46:48.485972   23621 main.go:141] libmachine: (ha-406505-m02)     </disk>
	I1007 10:46:48.485982   23621 main.go:141] libmachine: (ha-406505-m02)     <interface type='network'>
	I1007 10:46:48.485991   23621 main.go:141] libmachine: (ha-406505-m02)       <source network='mk-ha-406505'/>
	I1007 10:46:48.485999   23621 main.go:141] libmachine: (ha-406505-m02)       <model type='virtio'/>
	I1007 10:46:48.486005   23621 main.go:141] libmachine: (ha-406505-m02)     </interface>
	I1007 10:46:48.486013   23621 main.go:141] libmachine: (ha-406505-m02)     <interface type='network'>
	I1007 10:46:48.486025   23621 main.go:141] libmachine: (ha-406505-m02)       <source network='default'/>
	I1007 10:46:48.486034   23621 main.go:141] libmachine: (ha-406505-m02)       <model type='virtio'/>
	I1007 10:46:48.486044   23621 main.go:141] libmachine: (ha-406505-m02)     </interface>
	I1007 10:46:48.486053   23621 main.go:141] libmachine: (ha-406505-m02)     <serial type='pty'>
	I1007 10:46:48.486063   23621 main.go:141] libmachine: (ha-406505-m02)       <target port='0'/>
	I1007 10:46:48.486074   23621 main.go:141] libmachine: (ha-406505-m02)     </serial>
	I1007 10:46:48.486084   23621 main.go:141] libmachine: (ha-406505-m02)     <console type='pty'>
	I1007 10:46:48.486094   23621 main.go:141] libmachine: (ha-406505-m02)       <target type='serial' port='0'/>
	I1007 10:46:48.486098   23621 main.go:141] libmachine: (ha-406505-m02)     </console>
	I1007 10:46:48.486106   23621 main.go:141] libmachine: (ha-406505-m02)     <rng model='virtio'>
	I1007 10:46:48.486122   23621 main.go:141] libmachine: (ha-406505-m02)       <backend model='random'>/dev/random</backend>
	I1007 10:46:48.486134   23621 main.go:141] libmachine: (ha-406505-m02)     </rng>
	I1007 10:46:48.486147   23621 main.go:141] libmachine: (ha-406505-m02)     
	I1007 10:46:48.486157   23621 main.go:141] libmachine: (ha-406505-m02)     
	I1007 10:46:48.486167   23621 main.go:141] libmachine: (ha-406505-m02)   </devices>
	I1007 10:46:48.486184   23621 main.go:141] libmachine: (ha-406505-m02) </domain>
	I1007 10:46:48.486192   23621 main.go:141] libmachine: (ha-406505-m02) 
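The XML dumped above is the libvirt domain definition the driver builds before creating the VM. Below is a small Go sketch of how such a definition can be rendered with text/template; the template is a heavily trimmed, illustrative stand-in, not the driver's real template, and domainConfig is a made-up type.

package main

import (
	"os"
	"text/template"
)

// domainConfig carries the handful of fields the trimmed template needs.
type domainConfig struct {
	Name     string
	MemoryMB int
	CPUs     int
	DiskPath string
	Network  string
}

const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	cfg := domainConfig{Name: "ha-406505-m02", MemoryMB: 2200, CPUs: 2,
		DiskPath: "/path/to/ha-406505-m02.rawdisk", Network: "mk-ha-406505"}
	// Render the XML to stdout; a driver would hand the result to libvirt to define the domain.
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}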
	I1007 10:46:48.492959   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:11:dc:7d in network default
	I1007 10:46:48.493532   23621 main.go:141] libmachine: (ha-406505-m02) Ensuring networks are active...
	I1007 10:46:48.493555   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:48.494204   23621 main.go:141] libmachine: (ha-406505-m02) Ensuring network default is active
	I1007 10:46:48.494531   23621 main.go:141] libmachine: (ha-406505-m02) Ensuring network mk-ha-406505 is active
	I1007 10:46:48.494994   23621 main.go:141] libmachine: (ha-406505-m02) Getting domain xml...
	I1007 10:46:48.495697   23621 main.go:141] libmachine: (ha-406505-m02) Creating domain...
	I1007 10:46:49.708066   23621 main.go:141] libmachine: (ha-406505-m02) Waiting to get IP...
	I1007 10:46:49.709797   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:49.710242   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:49.710274   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:49.710223   23988 retry.go:31] will retry after 204.773065ms: waiting for machine to come up
	I1007 10:46:49.916620   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:49.917029   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:49.917049   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:49.916992   23988 retry.go:31] will retry after 235.714104ms: waiting for machine to come up
	I1007 10:46:50.154409   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:50.154821   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:50.154854   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:50.154800   23988 retry.go:31] will retry after 473.988416ms: waiting for machine to come up
	I1007 10:46:50.630146   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:50.630593   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:50.630617   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:50.630561   23988 retry.go:31] will retry after 436.51933ms: waiting for machine to come up
	I1007 10:46:51.068126   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:51.068602   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:51.068629   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:51.068593   23988 retry.go:31] will retry after 554.772898ms: waiting for machine to come up
	I1007 10:46:51.625423   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:51.625799   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:51.625821   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:51.625760   23988 retry.go:31] will retry after 790.073775ms: waiting for machine to come up
	I1007 10:46:52.417715   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:52.418041   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:52.418068   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:52.417996   23988 retry.go:31] will retry after 1.143940138s: waiting for machine to come up
	I1007 10:46:53.563665   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:53.564172   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:53.564191   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:53.564119   23988 retry.go:31] will retry after 1.216262675s: waiting for machine to come up
	I1007 10:46:54.782182   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:54.782642   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:54.782668   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:54.782571   23988 retry.go:31] will retry after 1.336251943s: waiting for machine to come up
	I1007 10:46:56.120924   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:56.121343   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:56.121364   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:56.121297   23988 retry.go:31] will retry after 2.26253824s: waiting for machine to come up
	I1007 10:46:58.385702   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:46:58.386103   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:46:58.386134   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:46:58.386057   23988 retry.go:31] will retry after 1.827723489s: waiting for machine to come up
	I1007 10:47:00.215316   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:00.215726   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:47:00.215747   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:47:00.215701   23988 retry.go:31] will retry after 2.599258612s: waiting for machine to come up
	I1007 10:47:02.818331   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:02.818781   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:47:02.818803   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:47:02.818737   23988 retry.go:31] will retry after 3.193038382s: waiting for machine to come up
	I1007 10:47:06.014368   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:06.014784   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find current IP address of domain ha-406505-m02 in network mk-ha-406505
	I1007 10:47:06.014809   23621 main.go:141] libmachine: (ha-406505-m02) DBG | I1007 10:47:06.014743   23988 retry.go:31] will retry after 3.576827994s: waiting for machine to come up
	I1007 10:47:09.593923   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:09.594365   23621 main.go:141] libmachine: (ha-406505-m02) Found IP for machine: 192.168.39.37
	I1007 10:47:09.594385   23621 main.go:141] libmachine: (ha-406505-m02) Reserving static IP address...
	I1007 10:47:09.594399   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has current primary IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:09.594746   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find host DHCP lease matching {name: "ha-406505-m02", mac: "52:54:00:c4:d0:65", ip: "192.168.39.37"} in network mk-ha-406505
	I1007 10:47:09.668479   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Getting to WaitForSSH function...
	I1007 10:47:09.668509   23621 main.go:141] libmachine: (ha-406505-m02) Reserved static IP address: 192.168.39.37
	I1007 10:47:09.668519   23621 main.go:141] libmachine: (ha-406505-m02) Waiting for SSH to be available...
	I1007 10:47:09.670956   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:09.671275   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505
	I1007 10:47:09.671303   23621 main.go:141] libmachine: (ha-406505-m02) DBG | unable to find defined IP address of network mk-ha-406505 interface with MAC address 52:54:00:c4:d0:65
	I1007 10:47:09.671456   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Using SSH client type: external
	I1007 10:47:09.671481   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa (-rw-------)
	I1007 10:47:09.671540   23621 main.go:141] libmachine: (ha-406505-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 10:47:09.671566   23621 main.go:141] libmachine: (ha-406505-m02) DBG | About to run SSH command:
	I1007 10:47:09.671585   23621 main.go:141] libmachine: (ha-406505-m02) DBG | exit 0
	I1007 10:47:09.675078   23621 main.go:141] libmachine: (ha-406505-m02) DBG | SSH cmd err, output: exit status 255: 
	I1007 10:47:09.675099   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1007 10:47:09.675105   23621 main.go:141] libmachine: (ha-406505-m02) DBG | command : exit 0
	I1007 10:47:09.675110   23621 main.go:141] libmachine: (ha-406505-m02) DBG | err     : exit status 255
	I1007 10:47:09.675118   23621 main.go:141] libmachine: (ha-406505-m02) DBG | output  : 
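The exit status 255 above just means sshd in the guest is not answering yet; the driver keeps re-running a trivial "exit 0" over SSH until it succeeds (it does at 10:47:12 below). Here is a hedged sketch of that probe using the external ssh client, as the log's "SSH client type: external" suggests; the option list is trimmed relative to the full command shown above and the retry count and interval are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns true once `ssh ... exit 0` succeeds against the guest.
func sshReady(user, ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, ip),
		"exit 0")
	return cmd.Run() == nil // exit status 255 (or any failure) means "not yet"
}

func main() {
	for i := 0; i < 20; i++ {
		if sshReady("docker", "192.168.39.37", "/path/to/id_rsa") {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second) // roughly the gap between attempts seen in the log
	}
	fmt.Println("gave up waiting for SSH")
}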
	I1007 10:47:12.677242   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Getting to WaitForSSH function...
	I1007 10:47:12.679802   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:12.680241   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:12.680268   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:12.680410   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Using SSH client type: external
	I1007 10:47:12.680433   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa (-rw-------)
	I1007 10:47:12.680466   23621 main.go:141] libmachine: (ha-406505-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.37 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 10:47:12.680481   23621 main.go:141] libmachine: (ha-406505-m02) DBG | About to run SSH command:
	I1007 10:47:12.680494   23621 main.go:141] libmachine: (ha-406505-m02) DBG | exit 0
	I1007 10:47:12.804189   23621 main.go:141] libmachine: (ha-406505-m02) DBG | SSH cmd err, output: <nil>: 
	I1007 10:47:12.804446   23621 main.go:141] libmachine: (ha-406505-m02) KVM machine creation complete!
	I1007 10:47:12.804774   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetConfigRaw
	I1007 10:47:12.805439   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:12.805661   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:12.805843   23621 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 10:47:12.805857   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetState
	I1007 10:47:12.807411   23621 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 10:47:12.807423   23621 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 10:47:12.807428   23621 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 10:47:12.807434   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:12.809666   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:12.809974   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:12.810001   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:12.810264   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:12.810464   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:12.810653   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:12.810803   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:12.810961   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:47:12.811169   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I1007 10:47:12.811184   23621 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 10:47:12.919372   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:47:12.919420   23621 main.go:141] libmachine: Detecting the provisioner...
	I1007 10:47:12.919430   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:12.922565   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:12.922966   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:12.922996   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:12.923171   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:12.923359   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:12.923510   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:12.923635   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:12.923785   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:47:12.923977   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I1007 10:47:12.924003   23621 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 10:47:13.033371   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 10:47:13.033448   23621 main.go:141] libmachine: found compatible host: buildroot
	I1007 10:47:13.033459   23621 main.go:141] libmachine: Provisioning with buildroot...
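"Detecting the provisioner" above boils down to reading /etc/os-release over SSH and matching its key=value fields; the Buildroot ID selects the buildroot provisioner. A small local sketch of parsing that format follows; parseOSRelease and detectProvisioner are invented names used only for illustration.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns the KEY=value lines of /etc/os-release into a map,
// stripping optional surrounding quotes from the values.
func parseOSRelease(contents string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			out[k] = strings.Trim(v, `"`)
		}
	}
	return out
}

// detectProvisioner picks a provisioner name from the parsed fields.
func detectProvisioner(osRelease map[string]string) string {
	if strings.EqualFold(osRelease["ID"], "buildroot") {
		return "buildroot"
	}
	return "generic"
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	fields := parseOSRelease(sample)
	fmt.Println("found compatible host:", detectProvisioner(fields))
}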
	I1007 10:47:13.033472   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetMachineName
	I1007 10:47:13.033744   23621 buildroot.go:166] provisioning hostname "ha-406505-m02"
	I1007 10:47:13.033784   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetMachineName
	I1007 10:47:13.033956   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:13.036444   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.036782   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.036811   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.036919   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:13.037077   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.037212   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.037334   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:13.037500   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:47:13.037700   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I1007 10:47:13.037718   23621 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406505-m02 && echo "ha-406505-m02" | sudo tee /etc/hostname
	I1007 10:47:13.163957   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406505-m02
	
	I1007 10:47:13.164007   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:13.166790   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.167220   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.167245   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.167419   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:13.167615   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.167799   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.167934   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:13.168112   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:47:13.168270   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I1007 10:47:13.168286   23621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406505-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406505-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406505-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 10:47:13.289811   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:47:13.289837   23621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 10:47:13.289852   23621 buildroot.go:174] setting up certificates
	I1007 10:47:13.289860   23621 provision.go:84] configureAuth start
	I1007 10:47:13.289876   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetMachineName
	I1007 10:47:13.290178   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetIP
	I1007 10:47:13.292829   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.293122   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.293145   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.293256   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:13.296131   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.296632   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.296661   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.296855   23621 provision.go:143] copyHostCerts
	I1007 10:47:13.296886   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:47:13.296917   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 10:47:13.296926   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:47:13.296997   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 10:47:13.297093   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:47:13.297110   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 10:47:13.297114   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:47:13.297137   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 10:47:13.297178   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:47:13.297193   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 10:47:13.297199   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:47:13.297219   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 10:47:13.297264   23621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.ha-406505-m02 san=[127.0.0.1 192.168.39.37 ha-406505-m02 localhost minikube]
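The "generating server cert" entry above creates a server certificate for the new node with the listed IPs and names as SANs, signed by the profile's CA. The sketch below generates a comparable certificate with the same SAN list but self-signs it for brevity; the real code signs with the existing ca.pem/ca-key.pem, so this is illustrative only.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key pair for the server certificate.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-406505-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: 127.0.0.1, 192.168.39.37, ha-406505-m02, localhost, minikube.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.37")},
		DNSNames:    []string{"ha-406505-m02", "localhost", "minikube"},
	}

	// Self-signed here; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}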
	I1007 10:47:13.470867   23621 provision.go:177] copyRemoteCerts
	I1007 10:47:13.470925   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 10:47:13.470948   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:13.473620   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.473865   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.473901   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.474152   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:13.474379   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.474538   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:13.474650   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa Username:docker}
	I1007 10:47:13.558906   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 10:47:13.558995   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 10:47:13.584265   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 10:47:13.584335   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 10:47:13.609098   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 10:47:13.609208   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 10:47:13.633989   23621 provision.go:87] duration metric: took 344.11512ms to configureAuth
	I1007 10:47:13.634025   23621 buildroot.go:189] setting minikube options for container-runtime
	I1007 10:47:13.634234   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:47:13.634302   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:13.636945   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.637279   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.637307   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.637491   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:13.637663   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.637855   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.638031   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:13.638190   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:47:13.638341   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I1007 10:47:13.638355   23621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 10:47:13.873602   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 10:47:13.873628   23621 main.go:141] libmachine: Checking connection to Docker...
	I1007 10:47:13.873636   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetURL
	I1007 10:47:13.874889   23621 main.go:141] libmachine: (ha-406505-m02) DBG | Using libvirt version 6000000
	I1007 10:47:13.877460   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.877837   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.877860   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.878084   23621 main.go:141] libmachine: Docker is up and running!
	I1007 10:47:13.878101   23621 main.go:141] libmachine: Reticulating splines...
	I1007 10:47:13.878109   23621 client.go:171] duration metric: took 25.765852825s to LocalClient.Create
	I1007 10:47:13.878137   23621 start.go:167] duration metric: took 25.765919747s to libmachine.API.Create "ha-406505"
	I1007 10:47:13.878150   23621 start.go:293] postStartSetup for "ha-406505-m02" (driver="kvm2")
	I1007 10:47:13.878166   23621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 10:47:13.878189   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:13.878390   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 10:47:13.878411   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:13.880668   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.881014   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:13.881044   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:13.881180   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:13.881364   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:13.881519   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:13.881655   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa Username:docker}
	I1007 10:47:13.968514   23621 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 10:47:13.973091   23621 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 10:47:13.973116   23621 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 10:47:13.973185   23621 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 10:47:13.973262   23621 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 10:47:13.973272   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /etc/ssl/certs/110962.pem
	I1007 10:47:13.973349   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 10:47:13.984972   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:47:14.013706   23621 start.go:296] duration metric: took 135.541721ms for postStartSetup
	I1007 10:47:14.013768   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetConfigRaw
	I1007 10:47:14.014387   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetIP
	I1007 10:47:14.017290   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.017760   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:14.017791   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.018011   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:47:14.018210   23621 start.go:128] duration metric: took 25.92528673s to createHost
	I1007 10:47:14.018236   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:14.020800   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.021086   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:14.021115   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.021288   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:14.021489   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:14.021660   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:14.021768   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:14.021952   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:47:14.022115   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.37 22 <nil> <nil>}
	I1007 10:47:14.022125   23621 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 10:47:14.132989   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728298034.110680519
	
	I1007 10:47:14.133013   23621 fix.go:216] guest clock: 1728298034.110680519
	I1007 10:47:14.133022   23621 fix.go:229] Guest: 2024-10-07 10:47:14.110680519 +0000 UTC Remote: 2024-10-07 10:47:14.018221797 +0000 UTC m=+73.371361289 (delta=92.458722ms)
	I1007 10:47:14.133040   23621 fix.go:200] guest clock delta is within tolerance: 92.458722ms
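The fix.go lines above compare the guest's clock (read via `date +%s.%N` over SSH) against the host's and only resync it when the delta exceeds a tolerance; here ~92ms was accepted. Below is a sketch of that comparison using the two timestamps from the log; parseGuestClock is an invented helper, it assumes a 9-digit nanosecond field, and the 2s tolerance is an assumption (the log only shows that ~92ms passed).

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the "seconds.nanoseconds" output of `date +%s.%N`
// into a time.Time (assumes the fractional part has exactly nine digits).
func parseGuestClock(out string) (time.Time, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(out), ".")
	s, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	n, err := strconv.ParseInt(frac, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(s, n), nil
}

func main() {
	guest, err := parseGuestClock("1728298034.110680519") // guest-side value from the log
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 10, 7, 10, 47, 14, 18221797, time.UTC) // host-side time from the log
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance for this sketch
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; would sync the clock\n", delta)
	}
}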
	I1007 10:47:14.133051   23621 start.go:83] releasing machines lock for "ha-406505-m02", held for 26.040206453s
	I1007 10:47:14.133067   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:14.133299   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetIP
	I1007 10:47:14.135869   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.136305   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:14.136328   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.139140   23621 out.go:177] * Found network options:
	I1007 10:47:14.140689   23621 out.go:177]   - NO_PROXY=192.168.39.250
	W1007 10:47:14.142083   23621 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 10:47:14.142129   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:14.142678   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:14.142868   23621 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 10:47:14.142974   23621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 10:47:14.143014   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	W1007 10:47:14.143107   23621 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 10:47:14.143184   23621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 10:47:14.143226   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 10:47:14.145983   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.146148   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.146289   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:14.146315   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.146499   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:14.146575   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:14.146609   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:14.146657   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:14.146758   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 10:47:14.146834   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:14.146877   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 10:47:14.146982   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa Username:docker}
	I1007 10:47:14.147039   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 10:47:14.147184   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa Username:docker}
	I1007 10:47:14.387899   23621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 10:47:14.394771   23621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 10:47:14.394848   23621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 10:47:14.410661   23621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 10:47:14.410689   23621 start.go:495] detecting cgroup driver to use...
	I1007 10:47:14.410772   23621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 10:47:14.427868   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 10:47:14.444153   23621 docker.go:217] disabling cri-docker service (if available) ...
	I1007 10:47:14.444206   23621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 10:47:14.460223   23621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 10:47:14.476365   23621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 10:47:14.606104   23621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 10:47:14.745910   23621 docker.go:233] disabling docker service ...
	I1007 10:47:14.745980   23621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 10:47:14.760987   23621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 10:47:14.774829   23621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 10:47:14.912287   23621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 10:47:15.035180   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 10:47:15.050257   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 10:47:15.070114   23621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 10:47:15.070181   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.081232   23621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 10:47:15.081328   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.097360   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.109085   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.120920   23621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 10:47:15.132712   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.143857   23621 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.162242   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:47:15.173052   23621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 10:47:15.183576   23621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 10:47:15.183636   23621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 10:47:15.198592   23621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 10:47:15.209269   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:47:15.343340   23621 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 10:47:15.435410   23621 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 10:47:15.435495   23621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 10:47:15.440650   23621 start.go:563] Will wait 60s for crictl version
	I1007 10:47:15.440716   23621 ssh_runner.go:195] Run: which crictl
	I1007 10:47:15.445010   23621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 10:47:15.485747   23621 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 10:47:15.485842   23621 ssh_runner.go:195] Run: crio --version
	I1007 10:47:15.514633   23621 ssh_runner.go:195] Run: crio --version
	I1007 10:47:15.544607   23621 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 10:47:15.546495   23621 out.go:177]   - env NO_PROXY=192.168.39.250
	I1007 10:47:15.547763   23621 main.go:141] libmachine: (ha-406505-m02) Calling .GetIP
	I1007 10:47:15.550503   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:15.550835   23621 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:47:03 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 10:47:15.550856   23621 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 10:47:15.551135   23621 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 10:47:15.555619   23621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:47:15.568228   23621 mustload.go:65] Loading cluster: ha-406505
	I1007 10:47:15.568429   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:47:15.568711   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:47:15.568757   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:47:15.583930   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32787
	I1007 10:47:15.584453   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:47:15.584977   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:47:15.584999   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:47:15.585308   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:47:15.585449   23621 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:47:15.586928   23621 host.go:66] Checking if "ha-406505" exists ...
	I1007 10:47:15.587242   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:47:15.587291   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:47:15.601672   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41215
	I1007 10:47:15.602061   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:47:15.602537   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:47:15.602556   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:47:15.602817   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:47:15.602964   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:47:15.603079   23621 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505 for IP: 192.168.39.37
	I1007 10:47:15.603088   23621 certs.go:194] generating shared ca certs ...
	I1007 10:47:15.603106   23621 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:47:15.603231   23621 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 10:47:15.603292   23621 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 10:47:15.603306   23621 certs.go:256] generating profile certs ...
	I1007 10:47:15.603393   23621 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key
	I1007 10:47:15.603425   23621 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.9c139e39
	I1007 10:47:15.603446   23621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.9c139e39 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.250 192.168.39.37 192.168.39.254]
	I1007 10:47:15.744161   23621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.9c139e39 ...
	I1007 10:47:15.744193   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.9c139e39: {Name:mkae386a40e79e3b04467f9f82e8cc7ab31669ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:47:15.744370   23621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.9c139e39 ...
	I1007 10:47:15.744387   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.9c139e39: {Name:mkd96b82bea042246d2ff8a9f6d26e46ce2f8d80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:47:15.744484   23621 certs.go:381] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.9c139e39 -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt
	I1007 10:47:15.744631   23621 certs.go:385] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.9c139e39 -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key
	I1007 10:47:15.744793   23621 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key
	I1007 10:47:15.744812   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 10:47:15.744830   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 10:47:15.744846   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 10:47:15.744865   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 10:47:15.744882   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 10:47:15.744900   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 10:47:15.744919   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 10:47:15.744937   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 10:47:15.745001   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 10:47:15.745040   23621 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 10:47:15.745053   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 10:47:15.745085   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 10:47:15.745117   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 10:47:15.745148   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 10:47:15.745217   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:47:15.745255   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:47:15.745278   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem -> /usr/share/ca-certificates/11096.pem
	I1007 10:47:15.745298   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /usr/share/ca-certificates/110962.pem
	I1007 10:47:15.745339   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:47:15.748712   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:47:15.749114   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:47:15.749137   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:47:15.749337   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:47:15.749533   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:47:15.749703   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:47:15.749841   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:47:15.828372   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 10:47:15.833129   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 10:47:15.845052   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 10:47:15.849337   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1007 10:47:15.859666   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 10:47:15.864073   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 10:47:15.882571   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 10:47:15.888480   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1007 10:47:15.901431   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 10:47:15.905968   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 10:47:15.922566   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 10:47:15.927045   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 10:47:15.940895   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 10:47:15.967974   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 10:47:15.993940   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 10:47:16.018147   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 10:47:16.043434   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1007 10:47:16.069121   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 10:47:16.093333   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 10:47:16.117209   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 10:47:16.141941   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 10:47:16.166358   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 10:47:16.191390   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 10:47:16.216168   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 10:47:16.233270   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1007 10:47:16.250510   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 10:47:16.267543   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1007 10:47:16.287073   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 10:47:16.306608   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 10:47:16.324070   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 10:47:16.341221   23621 ssh_runner.go:195] Run: openssl version
	I1007 10:47:16.347150   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 10:47:16.358131   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 10:47:16.362824   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 10:47:16.362874   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 10:47:16.368599   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 10:47:16.378927   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 10:47:16.389775   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:47:16.394445   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:47:16.394503   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:47:16.400151   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 10:47:16.410835   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 10:47:16.421451   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 10:47:16.425954   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 10:47:16.426044   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 10:47:16.432023   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
	I1007 10:47:16.443765   23621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 10:47:16.448499   23621 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 10:47:16.448550   23621 kubeadm.go:934] updating node {m02 192.168.39.37 8443 v1.31.1 crio true true} ...
	I1007 10:47:16.448621   23621 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406505-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.37
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 10:47:16.448641   23621 kube-vip.go:115] generating kube-vip config ...
	I1007 10:47:16.448674   23621 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 10:47:16.465324   23621 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 10:47:16.465389   23621 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
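For reference, a minimal sketch (not minikube's actual kube-vip.go; the template and names below are assumptions) of how a static-pod manifest like the one generated above can be rendered from a Go text/template, parameterised by the control-plane VIP 192.168.39.254 and port 8443 that appear in this log:

package main

import (
	"os"
	"text/template"
)

// kubeVipTmpl is a trimmed-down pod manifest template; the manifest generated
// above carries many more env vars (vip_arp, cp_enable, lb_enable, ...).
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.3
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
  hostNetwork: true
`

type vipParams struct {
	VIP  string // control-plane virtual IP (192.168.39.254 in this run)
	Port string // API server port (8443 in this run)
}

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	if err := t.Execute(os.Stdout, vipParams{VIP: "192.168.39.254", Port: "8443"}); err != nil {
		panic(err)
	}
}

The rendered manifest is then copied to /etc/kubernetes/manifests/kube-vip.yaml, as the scp step later in this log shows.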
	I1007 10:47:16.465443   23621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 10:47:16.476363   23621 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1007 10:47:16.476434   23621 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1007 10:47:16.487040   23621 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1007 10:47:16.487085   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 10:47:16.487142   23621 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1007 10:47:16.487150   23621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 10:47:16.487275   23621 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1007 10:47:16.491771   23621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1007 10:47:16.491798   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1007 10:47:17.509026   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 10:47:17.524363   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 10:47:17.524452   23621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 10:47:17.528672   23621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1007 10:47:17.528709   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1007 10:47:17.599765   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 10:47:17.599853   23621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 10:47:17.612766   23621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1007 10:47:17.612810   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1007 10:47:18.077437   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 10:47:18.088177   23621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1007 10:47:18.105381   23621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 10:47:18.122405   23621 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 10:47:18.142555   23621 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 10:47:18.146470   23621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:47:18.159594   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:47:18.291092   23621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:47:18.309170   23621 host.go:66] Checking if "ha-406505" exists ...
	I1007 10:47:18.309657   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:47:18.309712   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:47:18.324913   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37239
	I1007 10:47:18.325340   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:47:18.325803   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:47:18.325831   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:47:18.326166   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:47:18.326334   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:47:18.326443   23621 start.go:317] joinCluster: &{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:47:18.326602   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1007 10:47:18.326630   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:47:18.329583   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:47:18.329975   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:47:18.330001   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:47:18.330140   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:47:18.330306   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:47:18.330451   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:47:18.330595   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:47:18.480055   23621 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:47:18.480129   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hab5tp.p59kud3l77ixefj4 --discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-406505-m02 --control-plane --apiserver-advertise-address=192.168.39.37 --apiserver-bind-port=8443"
	I1007 10:47:40.053984   23621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hab5tp.p59kud3l77ixefj4 --discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-406505-m02 --control-plane --apiserver-advertise-address=192.168.39.37 --apiserver-bind-port=8443": (21.573829794s)
	I1007 10:47:40.054022   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1007 10:47:40.624911   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-406505-m02 minikube.k8s.io/updated_at=2024_10_07T10_47_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f minikube.k8s.io/name=ha-406505 minikube.k8s.io/primary=false
	I1007 10:47:40.773203   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-406505-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1007 10:47:40.895450   23621 start.go:319] duration metric: took 22.569002454s to joinCluster
	I1007 10:47:40.895532   23621 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:47:40.895833   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:47:40.897246   23621 out.go:177] * Verifying Kubernetes components...
	I1007 10:47:40.898575   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:47:41.187385   23621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:47:41.220775   23621 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:47:41.221110   23621 kapi.go:59] client config for ha-406505: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.crt", KeyFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key", CAFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 10:47:41.221195   23621 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.250:8443
	I1007 10:47:41.221469   23621 node_ready.go:35] waiting up to 6m0s for node "ha-406505-m02" to be "Ready" ...
	I1007 10:47:41.221568   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:41.221578   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:41.221589   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:41.221596   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:41.242142   23621 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1007 10:47:41.721789   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:41.721819   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:41.721830   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:41.721836   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:41.725638   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:42.222559   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:42.222582   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:42.222592   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:42.222597   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:42.226807   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:42.722633   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:42.722659   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:42.722670   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:42.722676   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:42.727142   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:43.222278   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:43.222306   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:43.222318   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:43.222325   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:43.225924   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:43.226434   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
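The repeated GET requests to /api/v1/nodes/ha-406505-m02 below are node_ready.go polling until the node reports the Ready condition. A rough, hypothetical client-go sketch of that kind of wait loop (function names, kubeconfig path, and the 500ms interval are illustrative assumptions, not minikube's implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the named node reports the
// Ready condition as True, or the timeout expires.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log above polls roughly twice per second
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}

func main() {
	// Kubeconfig path is an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19761-3912/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "ha-406505-m02", 6*time.Minute); err != nil {
		panic(err)
	}
}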
	I1007 10:47:43.722388   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:43.722413   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:43.722421   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:43.722426   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:43.726394   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:44.221754   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:44.221782   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:44.221791   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:44.221797   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:44.225377   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:44.722382   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:44.722405   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:44.722415   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:44.722421   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:44.726019   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:45.222002   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:45.222024   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:45.222035   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:45.222042   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:45.228065   23621 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 10:47:45.228617   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:45.722139   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:45.722161   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:45.722169   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:45.722174   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:45.726310   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:46.221951   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:46.221984   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:46.221995   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:46.222001   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:46.226108   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:46.722407   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:46.722427   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:46.722434   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:46.722439   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:46.726228   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:47.222433   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:47.222457   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:47.222466   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:47.222471   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:47.226517   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:47.722508   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:47.722532   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:47.722541   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:47.722546   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:47.725944   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:47.726592   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:48.222456   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:48.222477   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:48.222487   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:48.222492   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:48.568208   23621 round_trippers.go:574] Response Status: 200 OK in 345 milliseconds
	I1007 10:47:48.721707   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:48.721729   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:48.721737   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:48.721740   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:48.725191   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:49.222104   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:49.222129   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:49.222137   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:49.222142   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:49.226421   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:49.722572   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:49.722597   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:49.722606   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:49.722610   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:49.726213   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:49.726960   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:50.222350   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:50.222373   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:50.222381   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:50.222384   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:50.226118   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:50.722605   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:50.722631   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:50.722640   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:50.722645   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:50.726160   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:51.221666   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:51.221694   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:51.221714   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:51.221721   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:51.225253   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:51.722133   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:51.722158   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:51.722167   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:51.722171   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:51.725645   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:52.221757   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:52.221780   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:52.221787   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:52.221792   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:52.226043   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:52.226536   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:52.721878   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:52.721905   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:52.721913   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:52.721917   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:52.725379   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:53.221755   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:53.221777   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:53.221786   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:53.221789   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:53.225585   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:53.721883   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:53.721908   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:53.721920   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:53.721925   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:53.725474   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:54.221694   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:54.221720   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:54.221731   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:54.221737   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:54.225868   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:54.226748   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:54.722061   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:54.722086   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:54.722094   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:54.722099   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:54.725979   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:55.221978   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:55.222010   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:55.222019   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:55.222022   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:55.225724   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:55.721884   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:55.721911   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:55.721924   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:55.721931   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:55.726067   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:56.222572   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:56.222595   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:56.222603   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:56.222606   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:56.227082   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:56.227824   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:56.722293   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:56.722317   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:56.722325   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:56.722329   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:56.726068   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:57.222438   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:57.222461   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:57.222469   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:57.222478   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:57.226913   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:57.722050   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:57.722075   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:57.722083   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:57.722087   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:57.726100   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:58.222538   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:58.222560   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:58.222568   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:58.222572   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:58.227033   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:47:58.722681   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:58.722703   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:58.722711   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:58.722717   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:58.725986   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:58.726597   23621 node_ready.go:53] node "ha-406505-m02" has status "Ready":"False"
	I1007 10:47:59.221983   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:59.222007   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:59.222015   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:59.222018   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:59.225585   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:47:59.722632   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:47:59.722658   23621 round_trippers.go:469] Request Headers:
	I1007 10:47:59.722668   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:47:59.722672   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:47:59.726213   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:00.222316   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:00.222339   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.222347   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.222351   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.225920   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:00.722449   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:00.722475   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.722484   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.722488   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.725827   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:00.726434   23621 node_ready.go:49] node "ha-406505-m02" has status "Ready":"True"
	I1007 10:48:00.726454   23621 node_ready.go:38] duration metric: took 19.504967744s for node "ha-406505-m02" to be "Ready" ...
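
Editor's note: the loop above issues GET /api/v1/nodes/ha-406505-m02 roughly every 500ms until the node's Ready condition flips to True. A minimal client-go sketch of the same check follows; the kubeconfig path and the polling interval are illustrative assumptions, and this is not minikube's actual node_ready implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the named node until its Ready condition is True
// or the context expires.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node %q not Ready: %w", name, ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	// Placeholder kubeconfig path; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "ha-406505-m02"); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}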
	I1007 10:48:00.726462   23621 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 10:48:00.726536   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:48:00.726548   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.726555   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.726559   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.731138   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:00.737911   23621 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ghmwd" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.737985   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghmwd
	I1007 10:48:00.737993   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.738001   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.738005   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.741209   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:00.742237   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:00.742253   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.742260   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.742265   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.745097   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:48:00.745537   23621 pod_ready.go:93] pod "coredns-7c65d6cfc9-ghmwd" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:00.745556   23621 pod_ready.go:82] duration metric: took 7.621102ms for pod "coredns-7c65d6cfc9-ghmwd" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.745565   23621 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xzc88" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.745629   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xzc88
	I1007 10:48:00.745638   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.745645   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.745650   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.748174   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:48:00.748906   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:00.748922   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.748930   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.748936   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.751224   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:48:00.751710   23621 pod_ready.go:93] pod "coredns-7c65d6cfc9-xzc88" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:00.751731   23621 pod_ready.go:82] duration metric: took 6.159383ms for pod "coredns-7c65d6cfc9-xzc88" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.751740   23621 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.751799   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-406505
	I1007 10:48:00.751809   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.751816   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.751820   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.755074   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:00.755602   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:00.755617   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.755625   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.755629   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.758258   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:48:00.758840   23621 pod_ready.go:93] pod "etcd-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:00.758864   23621 pod_ready.go:82] duration metric: took 7.117967ms for pod "etcd-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.758875   23621 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.758941   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-406505-m02
	I1007 10:48:00.758951   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.758962   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.758969   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.761946   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:48:00.762531   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:00.762545   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.762555   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.762563   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.765249   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:48:00.765990   23621 pod_ready.go:93] pod "etcd-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:00.766010   23621 pod_ready.go:82] duration metric: took 7.127993ms for pod "etcd-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.766024   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:00.923419   23621 request.go:632] Waited for 157.329652ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505
	I1007 10:48:00.923504   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505
	I1007 10:48:00.923514   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:00.923521   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:00.923526   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:00.926903   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:01.122872   23621 request.go:632] Waited for 195.370343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:01.122996   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:01.123006   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:01.123014   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:01.123018   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:01.126358   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:01.127128   23621 pod_ready.go:93] pod "kube-apiserver-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:01.127149   23621 pod_ready.go:82] duration metric: took 361.118588ms for pod "kube-apiserver-ha-406505" in "kube-system" namespace to be "Ready" ...
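
Editor's note: the "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter, which delays requests once the burst budget is spent. The sketch below shows where that limiter is configured on a rest.Config; the QPS/Burst values and kubeconfig path are illustrative, not what minikube actually uses.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	// client-go defaults to QPS=5, Burst=10; bursts of GETs beyond that are
	// delayed on the client side, which is what the "Waited for ..." lines report.
	// Illustrative values only.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", cs)
}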
	I1007 10:48:01.127159   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:01.322514   23621 request.go:632] Waited for 195.261429ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505-m02
	I1007 10:48:01.322571   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505-m02
	I1007 10:48:01.322577   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:01.322584   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:01.322589   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:01.326760   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:01.523038   23621 request.go:632] Waited for 195.412644ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:01.523093   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:01.523098   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:01.523105   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:01.523109   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:01.527065   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:01.527580   23621 pod_ready.go:93] pod "kube-apiserver-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:01.527599   23621 pod_ready.go:82] duration metric: took 400.432673ms for pod "kube-apiserver-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:01.527611   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:01.722806   23621 request.go:632] Waited for 195.048611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505
	I1007 10:48:01.722880   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505
	I1007 10:48:01.722888   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:01.722898   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:01.722904   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:01.727096   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:01.923348   23621 request.go:632] Waited for 195.373775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:01.923440   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:01.923452   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:01.923463   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:01.923469   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:01.927522   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:01.927961   23621 pod_ready.go:93] pod "kube-controller-manager-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:01.927977   23621 pod_ready.go:82] duration metric: took 400.359633ms for pod "kube-controller-manager-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:01.928001   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:02.123092   23621 request.go:632] Waited for 195.004556ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505-m02
	I1007 10:48:02.123150   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505-m02
	I1007 10:48:02.123157   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:02.123164   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:02.123167   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:02.127404   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:02.323429   23621 request.go:632] Waited for 195.351342ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:02.323503   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:02.323511   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:02.323522   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:02.323532   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:02.326657   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:02.327382   23621 pod_ready.go:93] pod "kube-controller-manager-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:02.327399   23621 pod_ready.go:82] duration metric: took 399.387331ms for pod "kube-controller-manager-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:02.327409   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6ng4z" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:02.522522   23621 request.go:632] Waited for 195.05566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6ng4z
	I1007 10:48:02.522601   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6ng4z
	I1007 10:48:02.522607   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:02.522615   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:02.522620   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:02.526624   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:02.722785   23621 request.go:632] Waited for 195.392665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:02.722866   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:02.722874   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:02.722885   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:02.722889   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:02.726617   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:02.727143   23621 pod_ready.go:93] pod "kube-proxy-6ng4z" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:02.727160   23621 pod_ready.go:82] duration metric: took 399.745226ms for pod "kube-proxy-6ng4z" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:02.727169   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nlnhf" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:02.923398   23621 request.go:632] Waited for 196.154565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nlnhf
	I1007 10:48:02.923464   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nlnhf
	I1007 10:48:02.923473   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:02.923484   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:02.923492   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:02.926698   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:03.122834   23621 request.go:632] Waited for 195.347405ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:03.122890   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:03.122897   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:03.122905   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:03.122909   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:03.126570   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:03.127726   23621 pod_ready.go:93] pod "kube-proxy-nlnhf" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:03.127745   23621 pod_ready.go:82] duration metric: took 400.569818ms for pod "kube-proxy-nlnhf" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:03.127759   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:03.322923   23621 request.go:632] Waited for 195.092944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505
	I1007 10:48:03.322991   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505
	I1007 10:48:03.322997   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:03.323004   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:03.323009   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:03.326336   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:03.523252   23621 request.go:632] Waited for 196.355286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:03.523323   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:48:03.523328   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:03.523336   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:03.523344   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:03.526876   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:03.527478   23621 pod_ready.go:93] pod "kube-scheduler-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:03.527506   23621 pod_ready.go:82] duration metric: took 399.737789ms for pod "kube-scheduler-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:03.527518   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:03.722433   23621 request.go:632] Waited for 194.843724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505-m02
	I1007 10:48:03.722510   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505-m02
	I1007 10:48:03.722516   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:03.722524   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:03.722534   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:03.726261   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:03.923306   23621 request.go:632] Waited for 196.357784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:03.923362   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:48:03.923368   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:03.923375   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:03.923379   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:03.927011   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:03.927578   23621 pod_ready.go:93] pod "kube-scheduler-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:48:03.927594   23621 pod_ready.go:82] duration metric: took 400.068935ms for pod "kube-scheduler-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:48:03.927605   23621 pod_ready.go:39] duration metric: took 3.201132108s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
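
Editor's note: the "extra waiting" step walks a set of label selectors (k8s-app=kube-dns, component=etcd, and so on) and checks each matching pod's Ready condition. A sketch for a single selector is below; the kubeconfig path is a placeholder and the real code iterates several selectors.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the PodReady condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// One of the selectors from the log; the real loop walks several.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for i := range pods.Items {
		fmt.Printf("%s Ready=%v\n", pods.Items[i].Name, podReady(&pods.Items[i]))
	}
}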
	I1007 10:48:03.927618   23621 api_server.go:52] waiting for apiserver process to appear ...
	I1007 10:48:03.927663   23621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 10:48:03.942605   23621 api_server.go:72] duration metric: took 23.047005374s to wait for apiserver process to appear ...
	I1007 10:48:03.942635   23621 api_server.go:88] waiting for apiserver healthz status ...
	I1007 10:48:03.942653   23621 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I1007 10:48:03.947020   23621 api_server.go:279] https://192.168.39.250:8443/healthz returned 200:
	ok
	I1007 10:48:03.947103   23621 round_trippers.go:463] GET https://192.168.39.250:8443/version
	I1007 10:48:03.947113   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:03.947126   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:03.947134   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:03.948044   23621 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1007 10:48:03.948143   23621 api_server.go:141] control plane version: v1.31.1
	I1007 10:48:03.948169   23621 api_server.go:131] duration metric: took 5.525857ms to wait for apiserver health ...
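
Editor's note: once the pods settle, the log probes https://192.168.39.250:8443/healthz and then GETs /version to read the control-plane version. A bare-bones probe with net/http is sketched below; it skips TLS verification purely to keep the example short, whereas the real client authenticates with the cluster CA and client certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// For illustration only: skip TLS verification instead of loading the
	// cluster CA and client certificates a real apiserver client would use.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://192.168.39.250:8443" + path)
		if err != nil {
			fmt.Println(path, "error:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> %d: %s\n", path, resp.StatusCode, body)
	}
}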
	I1007 10:48:03.948178   23621 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 10:48:04.122494   23621 request.go:632] Waited for 174.227541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:48:04.122548   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:48:04.122554   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:04.122561   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:04.122565   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:04.127425   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:04.131821   23621 system_pods.go:59] 17 kube-system pods found
	I1007 10:48:04.131853   23621 system_pods.go:61] "coredns-7c65d6cfc9-ghmwd" [8d8533b9-192b-49a8-8d17-96ffd98cb729] Running
	I1007 10:48:04.131860   23621 system_pods.go:61] "coredns-7c65d6cfc9-xzc88" [f22736c0-5ca4-4c9b-bcd4-cf95f9390507] Running
	I1007 10:48:04.131867   23621 system_pods.go:61] "etcd-ha-406505" [06acd5be-60d4-4e5d-878a-eb237eccef90] Running
	I1007 10:48:04.131873   23621 system_pods.go:61] "etcd-ha-406505-m02" [6bac8986-60bc-4067-8d14-39e7a7d89de4] Running
	I1007 10:48:04.131878   23621 system_pods.go:61] "kindnet-h8fh4" [4963cef8-d0f0-47a7-a9f3-4ec6cc1cbdd2] Running
	I1007 10:48:04.131884   23621 system_pods.go:61] "kindnet-pt74h" [bb72605c-a772-4b04-a14d-02efe957c9d0] Running
	I1007 10:48:04.131889   23621 system_pods.go:61] "kube-apiserver-ha-406505" [86ec3125-9faf-431c-829c-74bedca10848] Running
	I1007 10:48:04.131893   23621 system_pods.go:61] "kube-apiserver-ha-406505-m02" [9b1f1980-971a-4059-90f3-75aa418811e9] Running
	I1007 10:48:04.131898   23621 system_pods.go:61] "kube-controller-manager-ha-406505" [9f228931-e5fe-4983-aed5-71ec54a10242] Running
	I1007 10:48:04.131903   23621 system_pods.go:61] "kube-controller-manager-ha-406505-m02" [87c1a5f3-2c61-40b6-8ac6-8641fe480883] Running
	I1007 10:48:04.131908   23621 system_pods.go:61] "kube-proxy-6ng4z" [0bbf71c3-f4c6-44e2-a86f-2528957fd17e] Running
	I1007 10:48:04.131914   23621 system_pods.go:61] "kube-proxy-nlnhf" [053080d5-38da-4108-96aa-f4a8dbe5de91] Running
	I1007 10:48:04.131919   23621 system_pods.go:61] "kube-scheduler-ha-406505" [40a9a2f6-4f4e-48eb-8ef2-958b87cea171] Running
	I1007 10:48:04.131925   23621 system_pods.go:61] "kube-scheduler-ha-406505-m02" [b1def4a5-2143-46b6-ae4a-44b7997ec7b2] Running
	I1007 10:48:04.131932   23621 system_pods.go:61] "kube-vip-ha-406505" [e31ca80b-44ca-4b0a-8d38-ba06f81592f8] Running
	I1007 10:48:04.131939   23621 system_pods.go:61] "kube-vip-ha-406505-m02" [982dab2a-18d2-409a-9d27-51f746597898] Running
	I1007 10:48:04.131945   23621 system_pods.go:61] "storage-provisioner" [be10b32c-e562-40ef-8b47-04cd1caf9778] Running
	I1007 10:48:04.131956   23621 system_pods.go:74] duration metric: took 183.770827ms to wait for pod list to return data ...
	I1007 10:48:04.131966   23621 default_sa.go:34] waiting for default service account to be created ...
	I1007 10:48:04.323406   23621 request.go:632] Waited for 191.335119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/default/serviceaccounts
	I1007 10:48:04.323466   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/default/serviceaccounts
	I1007 10:48:04.323474   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:04.323485   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:04.323491   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:04.326946   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:48:04.327172   23621 default_sa.go:45] found service account: "default"
	I1007 10:48:04.327188   23621 default_sa.go:55] duration metric: took 195.21627ms for default service account to be created ...
	I1007 10:48:04.327195   23621 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 10:48:04.522586   23621 request.go:632] Waited for 195.315471ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:48:04.522647   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:48:04.522653   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:04.522661   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:04.522664   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:04.527722   23621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 10:48:04.532291   23621 system_pods.go:86] 17 kube-system pods found
	I1007 10:48:04.532319   23621 system_pods.go:89] "coredns-7c65d6cfc9-ghmwd" [8d8533b9-192b-49a8-8d17-96ffd98cb729] Running
	I1007 10:48:04.532328   23621 system_pods.go:89] "coredns-7c65d6cfc9-xzc88" [f22736c0-5ca4-4c9b-bcd4-cf95f9390507] Running
	I1007 10:48:04.532333   23621 system_pods.go:89] "etcd-ha-406505" [06acd5be-60d4-4e5d-878a-eb237eccef90] Running
	I1007 10:48:04.532338   23621 system_pods.go:89] "etcd-ha-406505-m02" [6bac8986-60bc-4067-8d14-39e7a7d89de4] Running
	I1007 10:48:04.532345   23621 system_pods.go:89] "kindnet-h8fh4" [4963cef8-d0f0-47a7-a9f3-4ec6cc1cbdd2] Running
	I1007 10:48:04.532350   23621 system_pods.go:89] "kindnet-pt74h" [bb72605c-a772-4b04-a14d-02efe957c9d0] Running
	I1007 10:48:04.532356   23621 system_pods.go:89] "kube-apiserver-ha-406505" [86ec3125-9faf-431c-829c-74bedca10848] Running
	I1007 10:48:04.532362   23621 system_pods.go:89] "kube-apiserver-ha-406505-m02" [9b1f1980-971a-4059-90f3-75aa418811e9] Running
	I1007 10:48:04.532370   23621 system_pods.go:89] "kube-controller-manager-ha-406505" [9f228931-e5fe-4983-aed5-71ec54a10242] Running
	I1007 10:48:04.532380   23621 system_pods.go:89] "kube-controller-manager-ha-406505-m02" [87c1a5f3-2c61-40b6-8ac6-8641fe480883] Running
	I1007 10:48:04.532386   23621 system_pods.go:89] "kube-proxy-6ng4z" [0bbf71c3-f4c6-44e2-a86f-2528957fd17e] Running
	I1007 10:48:04.532395   23621 system_pods.go:89] "kube-proxy-nlnhf" [053080d5-38da-4108-96aa-f4a8dbe5de91] Running
	I1007 10:48:04.532401   23621 system_pods.go:89] "kube-scheduler-ha-406505" [40a9a2f6-4f4e-48eb-8ef2-958b87cea171] Running
	I1007 10:48:04.532409   23621 system_pods.go:89] "kube-scheduler-ha-406505-m02" [b1def4a5-2143-46b6-ae4a-44b7997ec7b2] Running
	I1007 10:48:04.532415   23621 system_pods.go:89] "kube-vip-ha-406505" [e31ca80b-44ca-4b0a-8d38-ba06f81592f8] Running
	I1007 10:48:04.532422   23621 system_pods.go:89] "kube-vip-ha-406505-m02" [982dab2a-18d2-409a-9d27-51f746597898] Running
	I1007 10:48:04.532426   23621 system_pods.go:89] "storage-provisioner" [be10b32c-e562-40ef-8b47-04cd1caf9778] Running
	I1007 10:48:04.532436   23621 system_pods.go:126] duration metric: took 205.234668ms to wait for k8s-apps to be running ...
	I1007 10:48:04.532449   23621 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 10:48:04.532504   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 10:48:04.548000   23621 system_svc.go:56] duration metric: took 15.524581ms WaitForService to wait for kubelet
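
Editor's note: the kubelet check above shells out (over minikube's ssh_runner) to systemctl is-active and treats exit status 0 as "active". The local equivalent, simplified to a plain exec rather than an SSH session, looks like this:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "systemctl is-active --quiet <unit>" exits 0 when the unit is active,
	// non-zero otherwise; run locally here rather than over SSH as in the log.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}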
	I1007 10:48:04.548032   23621 kubeadm.go:582] duration metric: took 23.652436292s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 10:48:04.548054   23621 node_conditions.go:102] verifying NodePressure condition ...
	I1007 10:48:04.723508   23621 request.go:632] Waited for 175.357529ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes
	I1007 10:48:04.723563   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes
	I1007 10:48:04.723568   23621 round_trippers.go:469] Request Headers:
	I1007 10:48:04.723576   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:48:04.723585   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:48:04.728067   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:48:04.728956   23621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 10:48:04.728985   23621 node_conditions.go:123] node cpu capacity is 2
	I1007 10:48:04.728999   23621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 10:48:04.729004   23621 node_conditions.go:123] node cpu capacity is 2
	I1007 10:48:04.729010   23621 node_conditions.go:105] duration metric: took 180.950188ms to run NodePressure ...
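
Editor's note: the NodePressure step lists all nodes and logs their ephemeral-storage and CPU capacity (17734596Ki and 2 CPUs per node above). A sketch that reads the same quantities from Node.Status.Capacity follows; the kubeconfig path is a placeholder.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}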
	I1007 10:48:04.729032   23621 start.go:241] waiting for startup goroutines ...
	I1007 10:48:04.729064   23621 start.go:255] writing updated cluster config ...
	I1007 10:48:04.731245   23621 out.go:201] 
	I1007 10:48:04.732721   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:48:04.732820   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:48:04.734501   23621 out.go:177] * Starting "ha-406505-m03" control-plane node in "ha-406505" cluster
	I1007 10:48:04.735780   23621 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:48:04.735806   23621 cache.go:56] Caching tarball of preloaded images
	I1007 10:48:04.735908   23621 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 10:48:04.735925   23621 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 10:48:04.736053   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:48:04.736293   23621 start.go:360] acquireMachinesLock for ha-406505-m03: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 10:48:04.736354   23621 start.go:364] duration metric: took 34.69µs to acquireMachinesLock for "ha-406505-m03"
	I1007 10:48:04.736376   23621 start.go:93] Provisioning new machine with config: &{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:48:04.736511   23621 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1007 10:48:04.738190   23621 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 10:48:04.738285   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:48:04.738332   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:48:04.754047   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32911
	I1007 10:48:04.754525   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:48:04.754992   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:48:04.755012   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:48:04.755365   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:48:04.755518   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetMachineName
	I1007 10:48:04.755655   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:04.755786   23621 start.go:159] libmachine.API.Create for "ha-406505" (driver="kvm2")
	I1007 10:48:04.755817   23621 client.go:168] LocalClient.Create starting
	I1007 10:48:04.755857   23621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem
	I1007 10:48:04.755899   23621 main.go:141] libmachine: Decoding PEM data...
	I1007 10:48:04.755923   23621 main.go:141] libmachine: Parsing certificate...
	I1007 10:48:04.755968   23621 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem
	I1007 10:48:04.755997   23621 main.go:141] libmachine: Decoding PEM data...
	I1007 10:48:04.756011   23621 main.go:141] libmachine: Parsing certificate...
	I1007 10:48:04.756031   23621 main.go:141] libmachine: Running pre-create checks...
	I1007 10:48:04.756042   23621 main.go:141] libmachine: (ha-406505-m03) Calling .PreCreateCheck
	I1007 10:48:04.756216   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetConfigRaw
	I1007 10:48:04.756599   23621 main.go:141] libmachine: Creating machine...
	I1007 10:48:04.756611   23621 main.go:141] libmachine: (ha-406505-m03) Calling .Create
	I1007 10:48:04.756765   23621 main.go:141] libmachine: (ha-406505-m03) Creating KVM machine...
	I1007 10:48:04.757963   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found existing default KVM network
	I1007 10:48:04.758099   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found existing private KVM network mk-ha-406505
	I1007 10:48:04.758232   23621 main.go:141] libmachine: (ha-406505-m03) Setting up store path in /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03 ...
	I1007 10:48:04.758273   23621 main.go:141] libmachine: (ha-406505-m03) Building disk image from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 10:48:04.758345   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:04.758258   24407 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:48:04.758425   23621 main.go:141] libmachine: (ha-406505-m03) Downloading /home/jenkins/minikube-integration/19761-3912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 10:48:05.006754   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:05.006635   24407 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa...
	I1007 10:48:05.394400   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:05.394253   24407 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/ha-406505-m03.rawdisk...
	I1007 10:48:05.394429   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Writing magic tar header
	I1007 10:48:05.394439   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Writing SSH key tar header
	I1007 10:48:05.394459   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:05.394362   24407 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03 ...
	I1007 10:48:05.394475   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03
	I1007 10:48:05.394502   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines
	I1007 10:48:05.394516   23621 main.go:141] libmachine: (ha-406505-m03) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03 (perms=drwx------)
	I1007 10:48:05.394522   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:48:05.394535   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912
	I1007 10:48:05.394541   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 10:48:05.394550   23621 main.go:141] libmachine: (ha-406505-m03) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines (perms=drwxr-xr-x)
	I1007 10:48:05.394560   23621 main.go:141] libmachine: (ha-406505-m03) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube (perms=drwxr-xr-x)
	I1007 10:48:05.394571   23621 main.go:141] libmachine: (ha-406505-m03) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912 (perms=drwxrwxr-x)
	I1007 10:48:05.394584   23621 main.go:141] libmachine: (ha-406505-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 10:48:05.394597   23621 main.go:141] libmachine: (ha-406505-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 10:48:05.394606   23621 main.go:141] libmachine: (ha-406505-m03) Creating domain...
	I1007 10:48:05.394611   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home/jenkins
	I1007 10:48:05.394619   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Checking permissions on dir: /home
	I1007 10:48:05.394623   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Skipping /home - not owner
	I1007 10:48:05.395724   23621 main.go:141] libmachine: (ha-406505-m03) define libvirt domain using xml: 
	I1007 10:48:05.395761   23621 main.go:141] libmachine: (ha-406505-m03) <domain type='kvm'>
	I1007 10:48:05.395773   23621 main.go:141] libmachine: (ha-406505-m03)   <name>ha-406505-m03</name>
	I1007 10:48:05.395784   23621 main.go:141] libmachine: (ha-406505-m03)   <memory unit='MiB'>2200</memory>
	I1007 10:48:05.395793   23621 main.go:141] libmachine: (ha-406505-m03)   <vcpu>2</vcpu>
	I1007 10:48:05.395802   23621 main.go:141] libmachine: (ha-406505-m03)   <features>
	I1007 10:48:05.395809   23621 main.go:141] libmachine: (ha-406505-m03)     <acpi/>
	I1007 10:48:05.395818   23621 main.go:141] libmachine: (ha-406505-m03)     <apic/>
	I1007 10:48:05.395827   23621 main.go:141] libmachine: (ha-406505-m03)     <pae/>
	I1007 10:48:05.395836   23621 main.go:141] libmachine: (ha-406505-m03)     
	I1007 10:48:05.395844   23621 main.go:141] libmachine: (ha-406505-m03)   </features>
	I1007 10:48:05.395854   23621 main.go:141] libmachine: (ha-406505-m03)   <cpu mode='host-passthrough'>
	I1007 10:48:05.395884   23621 main.go:141] libmachine: (ha-406505-m03)   
	I1007 10:48:05.395909   23621 main.go:141] libmachine: (ha-406505-m03)   </cpu>
	I1007 10:48:05.395940   23621 main.go:141] libmachine: (ha-406505-m03)   <os>
	I1007 10:48:05.395963   23621 main.go:141] libmachine: (ha-406505-m03)     <type>hvm</type>
	I1007 10:48:05.395977   23621 main.go:141] libmachine: (ha-406505-m03)     <boot dev='cdrom'/>
	I1007 10:48:05.396000   23621 main.go:141] libmachine: (ha-406505-m03)     <boot dev='hd'/>
	I1007 10:48:05.396019   23621 main.go:141] libmachine: (ha-406505-m03)     <bootmenu enable='no'/>
	I1007 10:48:05.396035   23621 main.go:141] libmachine: (ha-406505-m03)   </os>
	I1007 10:48:05.396063   23621 main.go:141] libmachine: (ha-406505-m03)   <devices>
	I1007 10:48:05.396094   23621 main.go:141] libmachine: (ha-406505-m03)     <disk type='file' device='cdrom'>
	I1007 10:48:05.396113   23621 main.go:141] libmachine: (ha-406505-m03)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/boot2docker.iso'/>
	I1007 10:48:05.396125   23621 main.go:141] libmachine: (ha-406505-m03)       <target dev='hdc' bus='scsi'/>
	I1007 10:48:05.396137   23621 main.go:141] libmachine: (ha-406505-m03)       <readonly/>
	I1007 10:48:05.396147   23621 main.go:141] libmachine: (ha-406505-m03)     </disk>
	I1007 10:48:05.396159   23621 main.go:141] libmachine: (ha-406505-m03)     <disk type='file' device='disk'>
	I1007 10:48:05.396176   23621 main.go:141] libmachine: (ha-406505-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 10:48:05.396192   23621 main.go:141] libmachine: (ha-406505-m03)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/ha-406505-m03.rawdisk'/>
	I1007 10:48:05.396207   23621 main.go:141] libmachine: (ha-406505-m03)       <target dev='hda' bus='virtio'/>
	I1007 10:48:05.396219   23621 main.go:141] libmachine: (ha-406505-m03)     </disk>
	I1007 10:48:05.396231   23621 main.go:141] libmachine: (ha-406505-m03)     <interface type='network'>
	I1007 10:48:05.396243   23621 main.go:141] libmachine: (ha-406505-m03)       <source network='mk-ha-406505'/>
	I1007 10:48:05.396258   23621 main.go:141] libmachine: (ha-406505-m03)       <model type='virtio'/>
	I1007 10:48:05.396270   23621 main.go:141] libmachine: (ha-406505-m03)     </interface>
	I1007 10:48:05.396280   23621 main.go:141] libmachine: (ha-406505-m03)     <interface type='network'>
	I1007 10:48:05.396290   23621 main.go:141] libmachine: (ha-406505-m03)       <source network='default'/>
	I1007 10:48:05.396300   23621 main.go:141] libmachine: (ha-406505-m03)       <model type='virtio'/>
	I1007 10:48:05.396309   23621 main.go:141] libmachine: (ha-406505-m03)     </interface>
	I1007 10:48:05.396320   23621 main.go:141] libmachine: (ha-406505-m03)     <serial type='pty'>
	I1007 10:48:05.396337   23621 main.go:141] libmachine: (ha-406505-m03)       <target port='0'/>
	I1007 10:48:05.396351   23621 main.go:141] libmachine: (ha-406505-m03)     </serial>
	I1007 10:48:05.396362   23621 main.go:141] libmachine: (ha-406505-m03)     <console type='pty'>
	I1007 10:48:05.396372   23621 main.go:141] libmachine: (ha-406505-m03)       <target type='serial' port='0'/>
	I1007 10:48:05.396382   23621 main.go:141] libmachine: (ha-406505-m03)     </console>
	I1007 10:48:05.396391   23621 main.go:141] libmachine: (ha-406505-m03)     <rng model='virtio'>
	I1007 10:48:05.396401   23621 main.go:141] libmachine: (ha-406505-m03)       <backend model='random'>/dev/random</backend>
	I1007 10:48:05.396411   23621 main.go:141] libmachine: (ha-406505-m03)     </rng>
	I1007 10:48:05.396418   23621 main.go:141] libmachine: (ha-406505-m03)     
	I1007 10:48:05.396427   23621 main.go:141] libmachine: (ha-406505-m03)     
	I1007 10:48:05.396436   23621 main.go:141] libmachine: (ha-406505-m03)   </devices>
	I1007 10:48:05.396454   23621 main.go:141] libmachine: (ha-406505-m03) </domain>
	I1007 10:48:05.396464   23621 main.go:141] libmachine: (ha-406505-m03) 
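
Editor's note: the kvm2 driver assembles the domain XML logged above and hands it to libvirt. A compressed sketch of that step using the libvirt Go bindings (assuming the libvirt.org/go/libvirt module and an XML file on disk) is below; it is not the driver's actual code path.

package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Domain XML like the one logged above, read from a file for brevity.
	xml, err := os.ReadFile("ha-406505-m03.xml")
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Define the persistent domain from XML, then start it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil {
		panic(err)
	}
	fmt.Println("domain defined and started")
}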
	I1007 10:48:05.403522   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:af:df:35 in network default
	I1007 10:48:05.404128   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:05.404146   23621 main.go:141] libmachine: (ha-406505-m03) Ensuring networks are active...
	I1007 10:48:05.404936   23621 main.go:141] libmachine: (ha-406505-m03) Ensuring network default is active
	I1007 10:48:05.405208   23621 main.go:141] libmachine: (ha-406505-m03) Ensuring network mk-ha-406505 is active
	I1007 10:48:05.405622   23621 main.go:141] libmachine: (ha-406505-m03) Getting domain xml...
	I1007 10:48:05.406377   23621 main.go:141] libmachine: (ha-406505-m03) Creating domain...
	I1007 10:48:06.663273   23621 main.go:141] libmachine: (ha-406505-m03) Waiting to get IP...
	I1007 10:48:06.664152   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:06.664559   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:06.664583   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:06.664538   24407 retry.go:31] will retry after 215.584214ms: waiting for machine to come up
	I1007 10:48:06.882094   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:06.882713   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:06.882744   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:06.882654   24407 retry.go:31] will retry after 346.060218ms: waiting for machine to come up
	I1007 10:48:07.229850   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:07.230332   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:07.230440   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:07.230280   24407 retry.go:31] will retry after 442.798208ms: waiting for machine to come up
	I1007 10:48:07.675076   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:07.675596   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:07.675620   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:07.675547   24407 retry.go:31] will retry after 562.649906ms: waiting for machine to come up
	I1007 10:48:08.240324   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:08.240767   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:08.240800   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:08.240736   24407 retry.go:31] will retry after 482.878877ms: waiting for machine to come up
	I1007 10:48:08.725445   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:08.725807   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:08.725869   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:08.725755   24407 retry.go:31] will retry after 616.205186ms: waiting for machine to come up
	I1007 10:48:09.343485   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:09.343941   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:09.344003   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:09.343909   24407 retry.go:31] will retry after 1.040138153s: waiting for machine to come up
	I1007 10:48:10.386253   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:10.386682   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:10.386713   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:10.386637   24407 retry.go:31] will retry after 1.418753496s: waiting for machine to come up
	I1007 10:48:11.807040   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:11.807484   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:11.807521   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:11.807425   24407 retry.go:31] will retry after 1.535016663s: waiting for machine to come up
	I1007 10:48:13.343720   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:13.344267   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:13.344302   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:13.344197   24407 retry.go:31] will retry after 1.769880509s: waiting for machine to come up
	I1007 10:48:15.115316   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:15.115817   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:15.115850   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:15.115759   24407 retry.go:31] will retry after 2.49899664s: waiting for machine to come up
	I1007 10:48:17.617100   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:17.617680   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:17.617710   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:17.617615   24407 retry.go:31] will retry after 2.794854441s: waiting for machine to come up
	I1007 10:48:20.413842   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:20.414235   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:20.414299   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:20.414227   24407 retry.go:31] will retry after 2.870258619s: waiting for machine to come up
	I1007 10:48:23.285865   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:23.286247   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find current IP address of domain ha-406505-m03 in network mk-ha-406505
	I1007 10:48:23.286273   23621 main.go:141] libmachine: (ha-406505-m03) DBG | I1007 10:48:23.286205   24407 retry.go:31] will retry after 5.059515205s: waiting for machine to come up
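
The retry.go lines above show the driver polling the network for a DHCP lease matching the node's MAC address, with a delay that grows from a few hundred milliseconds to several seconds. A minimal, self-contained sketch of that retry-with-backoff pattern (waitForIP and its lookup callback are hypothetical stand-ins, not the driver's API):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// waitForIP polls lookup() with a jittered, growing delay, similar in spirit
// to the retry.go lines in the log above. lookup stands in for querying the
// libvirt network for a lease matching the domain's MAC address.
func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		// Grow the delay and add jitter so retries do not land in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errNoLease // no lease yet on the first few attempts
		}
		return "192.168.39.102", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}
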
	I1007 10:48:28.350184   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:28.350662   23621 main.go:141] libmachine: (ha-406505-m03) Found IP for machine: 192.168.39.102
	I1007 10:48:28.350688   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has current primary IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:28.350700   23621 main.go:141] libmachine: (ha-406505-m03) Reserving static IP address...
	I1007 10:48:28.351065   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find host DHCP lease matching {name: "ha-406505-m03", mac: "52:54:00:7e:e4:e0", ip: "192.168.39.102"} in network mk-ha-406505
	I1007 10:48:28.431618   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Getting to WaitForSSH function...
	I1007 10:48:28.431646   23621 main.go:141] libmachine: (ha-406505-m03) Reserved static IP address: 192.168.39.102
	I1007 10:48:28.431659   23621 main.go:141] libmachine: (ha-406505-m03) Waiting for SSH to be available...
	I1007 10:48:28.434458   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:28.434796   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505
	I1007 10:48:28.434824   23621 main.go:141] libmachine: (ha-406505-m03) DBG | unable to find defined IP address of network mk-ha-406505 interface with MAC address 52:54:00:7e:e4:e0
	I1007 10:48:28.434975   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Using SSH client type: external
	I1007 10:48:28.435007   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa (-rw-------)
	I1007 10:48:28.435035   23621 main.go:141] libmachine: (ha-406505-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 10:48:28.435054   23621 main.go:141] libmachine: (ha-406505-m03) DBG | About to run SSH command:
	I1007 10:48:28.435085   23621 main.go:141] libmachine: (ha-406505-m03) DBG | exit 0
	I1007 10:48:28.439710   23621 main.go:141] libmachine: (ha-406505-m03) DBG | SSH cmd err, output: exit status 255: 
	I1007 10:48:28.439737   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1007 10:48:28.439768   23621 main.go:141] libmachine: (ha-406505-m03) DBG | command : exit 0
	I1007 10:48:28.439798   23621 main.go:141] libmachine: (ha-406505-m03) DBG | err     : exit status 255
	I1007 10:48:28.439811   23621 main.go:141] libmachine: (ha-406505-m03) DBG | output  : 
	I1007 10:48:31.440230   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Getting to WaitForSSH function...
	I1007 10:48:31.442839   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.443280   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:31.443311   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.443446   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Using SSH client type: external
	I1007 10:48:31.443482   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa (-rw-------)
	I1007 10:48:31.443520   23621 main.go:141] libmachine: (ha-406505-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 10:48:31.443544   23621 main.go:141] libmachine: (ha-406505-m03) DBG | About to run SSH command:
	I1007 10:48:31.443556   23621 main.go:141] libmachine: (ha-406505-m03) DBG | exit 0
	I1007 10:48:31.568683   23621 main.go:141] libmachine: (ha-406505-m03) DBG | SSH cmd err, output: <nil>: 
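
WaitForSSH succeeds once `exit 0` run through the external ssh client returns status 0 (the earlier attempt failed with status 255 because the guest had no IP yet). A small sketch of the same readiness probe, shelling out to ssh with the options seen in the log (paths, the retry count, and the 3s interval are placeholders):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `ssh ... exit 0` against the guest, the same liveness probe
// the log shows; a zero exit status means sshd is up and accepting our key.
func sshReady(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+ip,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	// Placeholder values for illustration only.
	ip, key := "192.168.39.102", "/path/to/id_rsa"
	for i := 0; i < 20; i++ {
		if sshReady(ip, key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
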
	I1007 10:48:31.568948   23621 main.go:141] libmachine: (ha-406505-m03) KVM machine creation complete!
	I1007 10:48:31.569279   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetConfigRaw
	I1007 10:48:31.569953   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:31.570177   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:31.570345   23621 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 10:48:31.570360   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetState
	I1007 10:48:31.571674   23621 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 10:48:31.571686   23621 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 10:48:31.571691   23621 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 10:48:31.571696   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:31.574360   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.574751   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:31.574773   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.574972   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:31.575161   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.575318   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.575453   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:31.575630   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:48:31.575886   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1007 10:48:31.575901   23621 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 10:48:31.679615   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:48:31.679639   23621 main.go:141] libmachine: Detecting the provisioner...
	I1007 10:48:31.679646   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:31.682574   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.682919   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:31.682944   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.683116   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:31.683308   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.683480   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.683605   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:31.683787   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:48:31.683977   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1007 10:48:31.684002   23621 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 10:48:31.789204   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 10:48:31.789302   23621 main.go:141] libmachine: found compatible host: buildroot
	I1007 10:48:31.789319   23621 main.go:141] libmachine: Provisioning with buildroot...
	I1007 10:48:31.789332   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetMachineName
	I1007 10:48:31.789607   23621 buildroot.go:166] provisioning hostname "ha-406505-m03"
	I1007 10:48:31.789633   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetMachineName
	I1007 10:48:31.789836   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:31.792541   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.792898   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:31.792925   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.793077   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:31.793430   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.793697   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.793864   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:31.794038   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:48:31.794203   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1007 10:48:31.794220   23621 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406505-m03 && echo "ha-406505-m03" | sudo tee /etc/hostname
	I1007 10:48:31.915086   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406505-m03
	
	I1007 10:48:31.915117   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:31.918064   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.918448   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:31.918486   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:31.918647   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:31.918833   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.918992   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:31.919119   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:31.919284   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:48:31.919488   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1007 10:48:31.919532   23621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406505-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406505-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406505-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 10:48:32.033622   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:48:32.033656   23621 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 10:48:32.033671   23621 buildroot.go:174] setting up certificates
	I1007 10:48:32.033679   23621 provision.go:84] configureAuth start
	I1007 10:48:32.033688   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetMachineName
	I1007 10:48:32.034012   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetIP
	I1007 10:48:32.037059   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.037482   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.037516   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.037674   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:32.040020   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.040373   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.040394   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.040541   23621 provision.go:143] copyHostCerts
	I1007 10:48:32.040567   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:48:32.040595   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 10:48:32.040603   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:48:32.040668   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 10:48:32.040738   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:48:32.040754   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 10:48:32.040761   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:48:32.040784   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 10:48:32.040824   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:48:32.040840   23621 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 10:48:32.040846   23621 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:48:32.040866   23621 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 10:48:32.040911   23621 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.ha-406505-m03 san=[127.0.0.1 192.168.39.102 ha-406505-m03 localhost minikube]
	I1007 10:48:32.221278   23621 provision.go:177] copyRemoteCerts
	I1007 10:48:32.221329   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 10:48:32.221355   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:32.224264   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.224745   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.224771   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.224993   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:32.225158   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.225327   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:32.225465   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa Username:docker}
	I1007 10:48:32.308320   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 10:48:32.308394   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 10:48:32.337349   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 10:48:32.337427   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 10:48:32.362724   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 10:48:32.362808   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 10:48:32.388055   23621 provision.go:87] duration metric: took 354.362269ms to configureAuth
	I1007 10:48:32.388097   23621 buildroot.go:189] setting minikube options for container-runtime
	I1007 10:48:32.388337   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:48:32.388417   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:32.391464   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.391888   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.391916   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.392130   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:32.392314   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.392419   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.392546   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:32.392731   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:48:32.392934   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1007 10:48:32.392957   23621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 10:48:32.625746   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 10:48:32.625778   23621 main.go:141] libmachine: Checking connection to Docker...
	I1007 10:48:32.625788   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetURL
	I1007 10:48:32.627033   23621 main.go:141] libmachine: (ha-406505-m03) DBG | Using libvirt version 6000000
	I1007 10:48:32.629153   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.629483   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.629535   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.629659   23621 main.go:141] libmachine: Docker is up and running!
	I1007 10:48:32.629673   23621 main.go:141] libmachine: Reticulating splines...
	I1007 10:48:32.629679   23621 client.go:171] duration metric: took 27.87385173s to LocalClient.Create
	I1007 10:48:32.629697   23621 start.go:167] duration metric: took 27.873912748s to libmachine.API.Create "ha-406505"
	I1007 10:48:32.629707   23621 start.go:293] postStartSetup for "ha-406505-m03" (driver="kvm2")
	I1007 10:48:32.629716   23621 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 10:48:32.629732   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:32.629961   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 10:48:32.629987   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:32.632229   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.632615   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.632638   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.632778   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:32.632953   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.633107   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:32.633255   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa Username:docker}
	I1007 10:48:32.719017   23621 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 10:48:32.723755   23621 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 10:48:32.723780   23621 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 10:48:32.723839   23621 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 10:48:32.723945   23621 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 10:48:32.723957   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /etc/ssl/certs/110962.pem
	I1007 10:48:32.724071   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 10:48:32.734023   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:48:32.759071   23621 start.go:296] duration metric: took 129.349571ms for postStartSetup
	I1007 10:48:32.759128   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetConfigRaw
	I1007 10:48:32.759727   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetIP
	I1007 10:48:32.762372   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.762794   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.762825   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.763105   23621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:48:32.763346   23621 start.go:128] duration metric: took 28.026823197s to createHost
	I1007 10:48:32.763370   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:32.765734   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.766060   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.766091   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.766305   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:32.766467   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.766612   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.766764   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:32.766903   23621 main.go:141] libmachine: Using SSH client type: native
	I1007 10:48:32.767070   23621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1007 10:48:32.767079   23621 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 10:48:32.873753   23621 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728298112.851911112
	
	I1007 10:48:32.873779   23621 fix.go:216] guest clock: 1728298112.851911112
	I1007 10:48:32.873789   23621 fix.go:229] Guest: 2024-10-07 10:48:32.851911112 +0000 UTC Remote: 2024-10-07 10:48:32.763358943 +0000 UTC m=+152.116498435 (delta=88.552169ms)
	I1007 10:48:32.873808   23621 fix.go:200] guest clock delta is within tolerance: 88.552169ms
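
The fix.go lines compare the guest's `date +%s.%N` output against the host clock and accept the machine when the drift is inside tolerance. A sketch of that comparison, reusing the timestamp from the log (clockDelta, the host value, and the 2s tolerance are illustrative assumptions, not the driver's constants):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses `date +%s.%N` output from the guest and returns how far
// it drifts from the supplied host time.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad or truncate the fractional part to exactly 9 digits of nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	// Guest timestamp copied from the log; host time is a placeholder.
	host := time.Unix(1728298112, 763358943)
	d, err := clockDelta("1728298112.851911112", host)
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second
	fmt.Printf("delta=%v within tolerance=%v: %v\n", d, tolerance, d > -tolerance && d < tolerance)
}
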
	I1007 10:48:32.873815   23621 start.go:83] releasing machines lock for "ha-406505-m03", held for 28.137449792s
	I1007 10:48:32.873834   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:32.874113   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetIP
	I1007 10:48:32.877249   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.877618   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.877659   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.879531   23621 out.go:177] * Found network options:
	I1007 10:48:32.880848   23621 out.go:177]   - NO_PROXY=192.168.39.250,192.168.39.37
	W1007 10:48:32.882090   23621 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 10:48:32.882109   23621 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 10:48:32.882124   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:32.882710   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:32.882882   23621 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:48:32.882980   23621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 10:48:32.883020   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	W1007 10:48:32.883028   23621 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 10:48:32.883048   23621 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 10:48:32.883114   23621 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 10:48:32.883136   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:48:32.885892   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.886191   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.886254   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.886279   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.886434   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:32.886593   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.886690   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:32.886721   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:32.886723   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:32.886891   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:48:32.886927   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa Username:docker}
	I1007 10:48:32.887008   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:48:32.887172   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:48:32.887336   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa Username:docker}
	I1007 10:48:33.125827   23621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 10:48:33.132836   23621 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 10:48:33.132914   23621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 10:48:33.152264   23621 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 10:48:33.152289   23621 start.go:495] detecting cgroup driver to use...
	I1007 10:48:33.152363   23621 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 10:48:33.172642   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 10:48:33.190770   23621 docker.go:217] disabling cri-docker service (if available) ...
	I1007 10:48:33.190848   23621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 10:48:33.206401   23621 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 10:48:33.222941   23621 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 10:48:33.363133   23621 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 10:48:33.526409   23621 docker.go:233] disabling docker service ...
	I1007 10:48:33.526475   23621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 10:48:33.542837   23621 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 10:48:33.557673   23621 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 10:48:33.715377   23621 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 10:48:33.847470   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 10:48:33.862560   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 10:48:33.884061   23621 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 10:48:33.884116   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.897298   23621 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 10:48:33.897363   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.909096   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.921064   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.932787   23621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 10:48:33.944724   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.956149   23621 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.976708   23621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:48:33.988978   23621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 10:48:33.999874   23621 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 10:48:33.999940   23621 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 10:48:34.015557   23621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
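
Because /proc/sys/net/bridge/bridge-nf-call-iptables did not exist yet, the flow falls back to `modprobe br_netfilter` and then enables IPv4 forwarding. A compact sketch of that fallback (ensureBridgeNetfilter is a made-up helper name; writing under /proc requires root, so this is illustrative only):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback in the log: if the
// bridge-nf-call-iptables sysctl is not present, load br_netfilter first,
// then turn on IPv4 forwarding the same way the logged command does.
func ensureBridgeNetfilter() error {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); os.IsNotExist(err) {
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("error:", err)
	}
}
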
	I1007 10:48:34.026499   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:48:34.149992   23621 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 10:48:34.251227   23621 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 10:48:34.251293   23621 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 10:48:34.256863   23621 start.go:563] Will wait 60s for crictl version
	I1007 10:48:34.256915   23621 ssh_runner.go:195] Run: which crictl
	I1007 10:48:34.260970   23621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 10:48:34.301659   23621 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 10:48:34.301747   23621 ssh_runner.go:195] Run: crio --version
	I1007 10:48:34.332633   23621 ssh_runner.go:195] Run: crio --version
	I1007 10:48:34.367466   23621 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 10:48:34.369001   23621 out.go:177]   - env NO_PROXY=192.168.39.250
	I1007 10:48:34.370423   23621 out.go:177]   - env NO_PROXY=192.168.39.250,192.168.39.37
	I1007 10:48:34.371711   23621 main.go:141] libmachine: (ha-406505-m03) Calling .GetIP
	I1007 10:48:34.374438   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:34.374867   23621 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:48:34.374897   23621 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:48:34.375117   23621 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 10:48:34.379896   23621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:48:34.393502   23621 mustload.go:65] Loading cluster: ha-406505
	I1007 10:48:34.393757   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:48:34.394025   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:48:34.394061   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:48:34.411296   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38513
	I1007 10:48:34.411826   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:48:34.412384   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:48:34.412408   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:48:34.412720   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:48:34.412914   23621 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:48:34.414711   23621 host.go:66] Checking if "ha-406505" exists ...
	I1007 10:48:34.415007   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:48:34.415055   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:48:34.431721   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34665
	I1007 10:48:34.432227   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:48:34.432721   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:48:34.432743   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:48:34.433085   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:48:34.433286   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:48:34.433443   23621 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505 for IP: 192.168.39.102
	I1007 10:48:34.433455   23621 certs.go:194] generating shared ca certs ...
	I1007 10:48:34.433473   23621 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:48:34.433653   23621 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 10:48:34.433694   23621 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 10:48:34.433704   23621 certs.go:256] generating profile certs ...
	I1007 10:48:34.433769   23621 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key
	I1007 10:48:34.433796   23621 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.2cb567af
	I1007 10:48:34.433810   23621 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.2cb567af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.250 192.168.39.37 192.168.39.102 192.168.39.254]
	I1007 10:48:34.626802   23621 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.2cb567af ...
	I1007 10:48:34.626838   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.2cb567af: {Name:mk4dc5899bb034b35a02970b97ee9a5705168f50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:48:34.627028   23621 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.2cb567af ...
	I1007 10:48:34.627045   23621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.2cb567af: {Name:mk33cc429fb28f1dd32077e7c6736b9265eee4dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:48:34.627160   23621 certs.go:381] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.2cb567af -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt
	I1007 10:48:34.627332   23621 certs.go:385] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.2cb567af -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key
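
The apiserver certificate above is generated with SANs covering the service ClusterIP, localhost, every control-plane node IP, and the HA virtual IP, then renamed into place. A self-contained sketch of producing such a SAN-bearing certificate with crypto/x509 (self-signed here for brevity; the real certificate is signed by the cluster CA and written to the profile directory):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs taken from the "Generating cert ... with IP's" line above.
	ips := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.250"), net.ParseIP("192.168.39.37"),
		net.ParseIP("192.168.39.102"), net.ParseIP("192.168.39.254"),
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"jenkins.ha-406505-m03"}},
		DNSNames:     []string{"ha-406505-m03", "localhost", "minikube"},
		IPAddresses:  ips,
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
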
	I1007 10:48:34.627505   23621 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key
	I1007 10:48:34.627523   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 10:48:34.627547   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 10:48:34.627570   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 10:48:34.627588   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 10:48:34.627606   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 10:48:34.627624   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 10:48:34.627650   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 10:48:34.648122   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 10:48:34.648245   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 10:48:34.648300   23621 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 10:48:34.648313   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 10:48:34.648345   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 10:48:34.648376   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 10:48:34.648424   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 10:48:34.649013   23621 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:48:34.649072   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /usr/share/ca-certificates/110962.pem
	I1007 10:48:34.649091   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:48:34.649106   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem -> /usr/share/ca-certificates/11096.pem
	I1007 10:48:34.649154   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:48:34.652851   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:48:34.653287   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:48:34.653319   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:48:34.653480   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:48:34.653695   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:48:34.653872   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:48:34.653998   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:48:34.732255   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 10:48:34.739182   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 10:48:34.751245   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 10:48:34.755732   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1007 10:48:34.766849   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 10:48:34.771581   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 10:48:34.783409   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 10:48:34.788150   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1007 10:48:34.799354   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 10:48:34.804283   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 10:48:34.816354   23621 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 10:48:34.821135   23621 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 10:48:34.834977   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 10:48:34.863883   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 10:48:34.896166   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 10:48:34.926479   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 10:48:34.954664   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1007 10:48:34.981371   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 10:48:35.009381   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 10:48:35.036950   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 10:48:35.063824   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 10:48:35.091476   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 10:48:35.119954   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 10:48:35.148052   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 10:48:35.166363   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1007 10:48:35.186175   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 10:48:35.205554   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1007 10:48:35.223002   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 10:48:35.240092   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 10:48:35.256797   23621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 10:48:35.274939   23621 ssh_runner.go:195] Run: openssl version
	I1007 10:48:35.281362   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 10:48:35.293636   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 10:48:35.298579   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 10:48:35.298639   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 10:48:35.304753   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 10:48:35.315888   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 10:48:35.326832   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:48:35.331554   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:48:35.331619   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:48:35.337434   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 10:48:35.348665   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 10:48:35.360023   23621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 10:48:35.365259   23621 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 10:48:35.365338   23621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 10:48:35.372821   23621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
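The openssl/ln pairs above implement OpenSSL's CA-lookup convention: each trusted certificate is exposed under /etc/ssl/certs/<subject-hash>.0 so the TLS stack can find it by hash. A minimal Go sketch of the same step, assuming openssl is on PATH; certPath and certsDir are illustrative names, not minikube's own helpers:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash mirrors the two shell commands above: compute the OpenSSL
// subject-name hash of a CA certificate and point <certsDir>/<hash>.0 at it.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // behave like `ln -fs`: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}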
	I1007 10:48:35.385592   23621 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 10:48:35.390405   23621 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 10:48:35.390455   23621 kubeadm.go:934] updating node {m03 192.168.39.102 8443 v1.31.1 crio true true} ...
	I1007 10:48:35.390529   23621 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406505-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 10:48:35.390554   23621 kube-vip.go:115] generating kube-vip config ...
	I1007 10:48:35.390588   23621 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 10:48:35.407020   23621 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 10:48:35.407098   23621 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
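The generated static pod above pins the HA virtual IP 192.168.39.254 on API server port 8443 and enables leader election plus control-plane load-balancing in kube-vip. A hypothetical text/template sketch of rendering a manifest like this from a few parameters (this is not minikube's actual template; the struct and field names are illustrative, values are copied from the log):

package main

import (
	"os"
	"text/template"
)

// A pared-down stand-in for the manifest above; only the per-cluster values
// are parameterized.
const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
    - name: cp_enable
      value: "true"
  hostNetwork: true
`

type params struct {
	Image, VIP, Port string
}

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	if err := t.Execute(os.Stdout, params{
		Image: "ghcr.io/kube-vip/kube-vip:v0.8.3",
		VIP:   "192.168.39.254",
		Port:  "8443",
	}); err != nil {
		panic(err)
	}
}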
	I1007 10:48:35.407155   23621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 10:48:35.417610   23621 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1007 10:48:35.417677   23621 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1007 10:48:35.428405   23621 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1007 10:48:35.428437   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 10:48:35.428436   23621 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1007 10:48:35.428474   23621 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1007 10:48:35.428487   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 10:48:35.428508   23621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 10:48:35.428547   23621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 10:48:35.428511   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 10:48:35.446473   23621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1007 10:48:35.446517   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1007 10:48:35.446544   23621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1007 10:48:35.446546   23621 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 10:48:35.446583   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1007 10:48:35.446648   23621 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 10:48:35.470883   23621 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1007 10:48:35.470927   23621 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
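The "Not caching binary, using https://dl.k8s.io/...?checksum=file:..." lines above pair each kubectl/kubeadm/kubelet download with its published .sha256 file. A small, self-contained Go sketch of that download-and-verify pattern, assuming nothing about minikube's internals (the URL and output filename are illustrative, and the whole file is buffered in memory, which is fine for a sketch but not for 70 MB binaries in production):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL into memory.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	const url = "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	bin, err := fetch(url)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(url + ".sha256") // the .sha256 file holds just the hex digest
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(sum))[0]
	h := sha256.Sum256(bin)
	if got := hex.EncodeToString(h[:]); got != want {
		panic("checksum mismatch for " + url)
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("verified and wrote kubectl")
}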
	I1007 10:48:36.357285   23621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 10:48:36.367780   23621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 10:48:36.389088   23621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 10:48:36.406417   23621 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 10:48:36.424782   23621 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 10:48:36.429051   23621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
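The bash one-liner above idempotently pins control-plane.minikube.internal to the HA virtual IP: strip any existing mapping, append the new one, and copy the result back over /etc/hosts. A hedged Go equivalent of the same rewrite, with paths and names taken from the log and minimal error handling:

package main

import (
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const hostName = "control-plane.minikube.internal"
	const entry = "192.168.39.254\t" + hostName

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostName) {
			continue // drop any previous mapping, like the `grep -v` above
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}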
	I1007 10:48:36.442669   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:48:36.586820   23621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:48:36.605650   23621 host.go:66] Checking if "ha-406505" exists ...
	I1007 10:48:36.606095   23621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:48:36.606145   23621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:48:36.622824   23621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45931
	I1007 10:48:36.623406   23621 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:48:36.623956   23621 main.go:141] libmachine: Using API Version  1
	I1007 10:48:36.624010   23621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:48:36.624375   23621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:48:36.624602   23621 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:48:36.624756   23621 start.go:317] joinCluster: &{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:48:36.624906   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1007 10:48:36.624922   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:48:36.628085   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:48:36.628498   23621 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:48:36.628533   23621 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:48:36.628663   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:48:36.628842   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:48:36.628992   23621 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:48:36.629138   23621 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:48:36.794813   23621 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:48:36.794869   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gpv0xr.ao0m8qerz0fls7pl --discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-406505-m03 --control-plane --apiserver-advertise-address=192.168.39.102 --apiserver-bind-port=8443"
	I1007 10:48:59.856325   23621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gpv0xr.ao0m8qerz0fls7pl --discovery-token-ca-cert-hash sha256:0af1372f9d63e41286c8f9287aaea3172bad212ad1bff5430661d64dd44628df --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-406505-m03 --control-plane --apiserver-advertise-address=192.168.39.102 --apiserver-bind-port=8443": (23.06138473s)
	I1007 10:48:59.856362   23621 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1007 10:49:00.490810   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-406505-m03 minikube.k8s.io/updated_at=2024_10_07T10_49_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f minikube.k8s.io/name=ha-406505 minikube.k8s.io/primary=false
	I1007 10:49:00.615125   23621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-406505-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1007 10:49:00.740706   23621 start.go:319] duration metric: took 24.115945375s to joinCluster
	I1007 10:49:00.740808   23621 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:49:00.741314   23621 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:49:00.742651   23621 out.go:177] * Verifying Kubernetes components...
	I1007 10:49:00.744087   23621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:49:00.980117   23621 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:49:00.996987   23621 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:49:00.997383   23621 kapi.go:59] client config for ha-406505: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.crt", KeyFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key", CAFile:"/home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 10:49:00.997456   23621 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.250:8443
	I1007 10:49:00.997848   23621 node_ready.go:35] waiting up to 6m0s for node "ha-406505-m03" to be "Ready" ...
	I1007 10:49:00.997952   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:00.997963   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:00.997973   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:00.997980   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:01.002879   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:01.498022   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:01.498047   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:01.498058   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:01.498063   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:01.502144   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:01.998532   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:01.998559   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:01.998571   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:01.998580   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:02.002214   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:02.498080   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:02.498113   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:02.498126   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:02.498132   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:02.502433   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:02.998449   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:02.998474   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:02.998482   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:02.998486   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:03.001753   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:03.002481   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:03.498693   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:03.498717   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:03.498727   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:03.498732   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:03.503726   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:03.998977   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:03.999008   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:03.999019   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:03.999026   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:04.002356   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:04.498338   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:04.498365   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:04.498374   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:04.498379   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:04.502295   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:04.998619   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:04.998645   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:04.998656   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:04.998660   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:05.001641   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:49:05.498634   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:05.498660   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:05.498671   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:05.498677   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:05.502156   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:05.502885   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:05.998723   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:05.998794   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:05.998812   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:05.998818   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:06.003873   23621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 10:49:06.499098   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:06.499119   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:06.499126   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:06.499131   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:06.503089   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:06.998553   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:06.998587   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:06.998595   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:06.998599   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:07.002580   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:07.498710   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:07.498736   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:07.498746   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:07.498751   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:07.502124   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:07.502967   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:07.998236   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:07.998258   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:07.998267   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:07.998271   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:08.001970   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:08.498896   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:08.498918   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:08.498927   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:08.498931   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:08.502697   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:08.998532   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:08.998561   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:08.998571   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:08.998578   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:09.002002   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:09.498039   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:09.498064   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:09.498077   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:09.498084   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:09.502005   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:09.998852   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:09.998879   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:09.998887   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:09.998893   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:10.002735   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:10.003524   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:10.499000   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:10.499026   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:10.499034   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:10.499046   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:10.502792   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:10.998624   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:10.998647   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:10.998659   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:10.998663   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:11.002342   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:11.498150   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:11.498177   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:11.498186   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:11.498193   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:11.502277   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:11.998714   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:11.998735   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:11.998743   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:11.998748   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:12.002263   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:12.498755   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:12.498782   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:12.498794   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:12.498801   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:12.502981   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:12.503718   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:12.999042   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:12.999069   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:12.999079   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:12.999085   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:13.002464   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:13.498077   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:13.498101   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:13.498110   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:13.498115   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:13.501652   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:13.998309   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:13.998332   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:13.998343   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:13.998347   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:14.001704   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:14.498713   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:14.498734   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:14.498742   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:14.498745   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:14.502719   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:14.999025   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:14.999047   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:14.999055   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:14.999059   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:15.002812   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:15.003362   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:15.498817   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:15.498839   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:15.498846   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:15.498850   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:15.504009   23621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 10:49:15.998456   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:15.998477   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:15.998485   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:15.998488   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:16.001780   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:16.498830   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:16.498857   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:16.498868   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:16.498873   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:16.502631   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:16.998224   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:16.998257   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:16.998268   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:16.998274   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:17.001615   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:17.498645   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:17.498672   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:17.498684   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:17.498688   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:17.502201   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:17.502837   23621 node_ready.go:53] node "ha-406505-m03" has status "Ready":"False"
	I1007 10:49:17.998189   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:17.998213   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:17.998220   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:17.998226   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.001816   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:18.498415   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:18.498450   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.498462   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.498469   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.502015   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:18.502523   23621 node_ready.go:49] node "ha-406505-m03" has status "Ready":"True"
	I1007 10:49:18.502543   23621 node_ready.go:38] duration metric: took 17.504667395s for node "ha-406505-m03" to be "Ready" ...
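The repeated GET /api/v1/nodes/ha-406505-m03 requests above poll the node's Ready condition roughly every 500ms until it flips to True (here after about 17.5s of the 6m0s budget). A hedged client-go sketch of the same wait; the kubeconfig path and node name are taken from the log, the loop itself is illustrative rather than minikube's implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19761-3912/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-406505-m03", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms polling cadence in the log
	}
	panic("timed out waiting for node to become Ready")
}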
	I1007 10:49:18.502551   23621 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 10:49:18.502632   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:49:18.502642   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.502650   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.502656   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.509327   23621 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 10:49:18.518372   23621 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ghmwd" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.518459   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ghmwd
	I1007 10:49:18.518464   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.518472   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.518479   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.521616   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:18.522356   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:18.522371   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.522378   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.522382   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.524976   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:49:18.525512   23621 pod_ready.go:93] pod "coredns-7c65d6cfc9-ghmwd" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:18.525532   23621 pod_ready.go:82] duration metric: took 7.133708ms for pod "coredns-7c65d6cfc9-ghmwd" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.525541   23621 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xzc88" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.525593   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xzc88
	I1007 10:49:18.525602   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.525608   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.525612   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.528321   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:49:18.529035   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:18.529049   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.529055   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.529058   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.531646   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:49:18.532124   23621 pod_ready.go:93] pod "coredns-7c65d6cfc9-xzc88" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:18.532141   23621 pod_ready.go:82] duration metric: took 6.593928ms for pod "coredns-7c65d6cfc9-xzc88" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.532153   23621 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.532225   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-406505
	I1007 10:49:18.532234   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.532244   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.532249   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.534614   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:49:18.535248   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:18.535264   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.535274   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.535279   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.537970   23621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 10:49:18.538368   23621 pod_ready.go:93] pod "etcd-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:18.538387   23621 pod_ready.go:82] duration metric: took 6.225816ms for pod "etcd-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.538401   23621 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.538461   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-406505-m02
	I1007 10:49:18.538472   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.538483   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.538491   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.541748   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:18.542359   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:18.542377   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.542389   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.542397   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.545668   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:18.546089   23621 pod_ready.go:93] pod "etcd-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:18.546104   23621 pod_ready.go:82] duration metric: took 7.695818ms for pod "etcd-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.546113   23621 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:18.698417   23621 request.go:632] Waited for 152.247174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-406505-m03
	I1007 10:49:18.698479   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-ha-406505-m03
	I1007 10:49:18.698485   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.698492   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.698497   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.702261   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:18.899482   23621 request.go:632] Waited for 196.389358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:18.899569   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:18.899582   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:18.899593   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:18.899603   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:18.903728   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:18.904256   23621 pod_ready.go:93] pod "etcd-ha-406505-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:18.904275   23621 pod_ready.go:82] duration metric: took 358.156028ms for pod "etcd-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
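The "Waited ... due to client-side throttling, not priority and fairness" lines above are emitted by client-go's default request rate limiter (roughly 5 requests/s with a burst of 10 when QPS/Burst are left at zero, as in the rest.Config dump earlier). For tools that intentionally issue bursts of GETs like this, one common option is to raise those limits on the config before building the clientset; a minimal sketch, with illustrative values:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19761-3912/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // default is ~5 requests per second
	cfg.Burst = 100 // default burst is ~10
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}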
	I1007 10:49:18.904291   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:19.099454   23621 request.go:632] Waited for 195.101714ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505
	I1007 10:49:19.099547   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505
	I1007 10:49:19.099559   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:19.099569   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:19.099575   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:19.103611   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:19.298735   23621 request.go:632] Waited for 194.375211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:19.298818   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:19.298825   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:19.298837   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:19.298856   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:19.302548   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:19.303053   23621 pod_ready.go:93] pod "kube-apiserver-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:19.303069   23621 pod_ready.go:82] duration metric: took 398.772541ms for pod "kube-apiserver-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:19.303079   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:19.499176   23621 request.go:632] Waited for 196.018641ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505-m02
	I1007 10:49:19.499270   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505-m02
	I1007 10:49:19.499283   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:19.499296   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:19.499309   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:19.503085   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:19.699374   23621 request.go:632] Waited for 195.380837ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:19.699426   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:19.699432   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:19.699439   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:19.699443   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:19.703099   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:19.703625   23621 pod_ready.go:93] pod "kube-apiserver-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:19.703644   23621 pod_ready.go:82] duration metric: took 400.557163ms for pod "kube-apiserver-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:19.703654   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:19.899212   23621 request.go:632] Waited for 195.494385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505-m03
	I1007 10:49:19.899266   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-406505-m03
	I1007 10:49:19.899271   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:19.899283   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:19.899289   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:19.902896   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:20.098927   23621 request.go:632] Waited for 195.376619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:20.098987   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:20.098993   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:20.099000   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:20.099004   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:20.102179   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:20.102740   23621 pod_ready.go:93] pod "kube-apiserver-ha-406505-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:20.102763   23621 pod_ready.go:82] duration metric: took 399.102679ms for pod "kube-apiserver-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:20.102773   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:20.298944   23621 request.go:632] Waited for 196.089064ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505
	I1007 10:49:20.299004   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505
	I1007 10:49:20.299010   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:20.299017   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:20.299023   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:20.302867   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:20.498409   23621 request.go:632] Waited for 194.294244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:20.498569   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:20.498582   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:20.498592   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:20.498599   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:20.502204   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:20.503003   23621 pod_ready.go:93] pod "kube-controller-manager-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:20.503027   23621 pod_ready.go:82] duration metric: took 400.247835ms for pod "kube-controller-manager-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:20.503037   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:20.699318   23621 request.go:632] Waited for 196.218592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505-m02
	I1007 10:49:20.699394   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505-m02
	I1007 10:49:20.699405   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:20.699415   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:20.699424   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:20.702950   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:20.899287   23621 request.go:632] Waited for 195.402635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:20.899343   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:20.899349   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:20.899370   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:20.899375   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:20.903339   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:20.904141   23621 pod_ready.go:93] pod "kube-controller-manager-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:20.904160   23621 pod_ready.go:82] duration metric: took 401.116067ms for pod "kube-controller-manager-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:20.904170   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:21.099320   23621 request.go:632] Waited for 195.054621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505-m03
	I1007 10:49:21.099383   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-406505-m03
	I1007 10:49:21.099391   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:21.099404   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:21.099415   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:21.103012   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:21.299153   23621 request.go:632] Waited for 195.377964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:21.299213   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:21.299218   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:21.299225   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:21.299229   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:21.303015   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:21.303516   23621 pod_ready.go:93] pod "kube-controller-manager-ha-406505-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:21.303534   23621 pod_ready.go:82] duration metric: took 399.355676ms for pod "kube-controller-manager-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:21.303543   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6ng4z" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:21.498530   23621 request.go:632] Waited for 194.920994ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6ng4z
	I1007 10:49:21.498597   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6ng4z
	I1007 10:49:21.498603   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:21.498610   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:21.498614   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:21.502242   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:21.699351   23621 request.go:632] Waited for 196.362706ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:21.699418   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:21.699423   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:21.699431   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:21.699435   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:21.702722   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:21.703412   23621 pod_ready.go:93] pod "kube-proxy-6ng4z" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:21.703429   23621 pod_ready.go:82] duration metric: took 399.878679ms for pod "kube-proxy-6ng4z" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:21.703439   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c79zf" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:21.898495   23621 request.go:632] Waited for 195.001064ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c79zf
	I1007 10:49:21.898570   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c79zf
	I1007 10:49:21.898576   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:21.898583   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:21.898587   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:21.903113   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:22.099311   23621 request.go:632] Waited for 195.352243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:22.099376   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:22.099384   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:22.099392   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:22.099397   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:22.102668   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:22.103269   23621 pod_ready.go:93] pod "kube-proxy-c79zf" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:22.103284   23621 pod_ready.go:82] duration metric: took 399.838704ms for pod "kube-proxy-c79zf" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:22.103298   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nlnhf" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:22.299438   23621 request.go:632] Waited for 196.048125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nlnhf
	I1007 10:49:22.299517   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nlnhf
	I1007 10:49:22.299528   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:22.299539   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:22.299548   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:22.303349   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:22.499362   23621 request.go:632] Waited for 195.369323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:22.499426   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:22.499434   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:22.499445   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:22.499452   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:22.503812   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:22.504569   23621 pod_ready.go:93] pod "kube-proxy-nlnhf" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:22.504595   23621 pod_ready.go:82] duration metric: took 401.287955ms for pod "kube-proxy-nlnhf" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:22.504608   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:22.698460   23621 request.go:632] Waited for 193.785531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505
	I1007 10:49:22.698548   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505
	I1007 10:49:22.698557   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:22.698568   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:22.698578   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:22.702017   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:22.898981   23621 request.go:632] Waited for 196.377795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:22.899067   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505
	I1007 10:49:22.899078   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:22.899089   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:22.899095   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:22.902303   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:22.903166   23621 pod_ready.go:93] pod "kube-scheduler-ha-406505" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:22.903182   23621 pod_ready.go:82] duration metric: took 398.566323ms for pod "kube-scheduler-ha-406505" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:22.903191   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:23.099385   23621 request.go:632] Waited for 196.133679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505-m02
	I1007 10:49:23.099448   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505-m02
	I1007 10:49:23.099455   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:23.099466   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:23.099472   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:23.102786   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:23.298901   23621 request.go:632] Waited for 195.266193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:23.298979   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m02
	I1007 10:49:23.299002   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:23.299017   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:23.299025   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:23.302232   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:23.302790   23621 pod_ready.go:93] pod "kube-scheduler-ha-406505-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:23.302809   23621 pod_ready.go:82] duration metric: took 399.610952ms for pod "kube-scheduler-ha-406505-m02" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:23.302821   23621 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:23.499180   23621 request.go:632] Waited for 196.292359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505-m03
	I1007 10:49:23.499272   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-406505-m03
	I1007 10:49:23.499287   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:23.499297   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:23.499301   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:23.502869   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:23.699193   23621 request.go:632] Waited for 195.355503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:23.699258   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/ha-406505-m03
	I1007 10:49:23.699265   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:23.699273   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:23.699279   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:23.703084   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:23.703667   23621 pod_ready.go:93] pod "kube-scheduler-ha-406505-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 10:49:23.703685   23621 pod_ready.go:82] duration metric: took 400.856999ms for pod "kube-scheduler-ha-406505-m03" in "kube-system" namespace to be "Ready" ...
	I1007 10:49:23.703698   23621 pod_ready.go:39] duration metric: took 5.201137337s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
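The readiness loop logged above polls each control-plane pod (and its node) until the pod reports the Ready condition. A minimal, hedged sketch of an equivalent check with client-go follows; this is not minikube's pod_ready.go, and the kubeconfig path and pod name are assumptions chosen only for illustration.

// Hedged illustration: report whether a pod has the Ready condition, similar in
// spirit to the pod_ready.go waits above. Assumes the default kubeconfig; the
// pod name below is an example taken from this log.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(c kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(kubernetes.NewForConfigOrDie(cfg), "kube-system", "kube-scheduler-ha-406505-m03")
	fmt.Println(ready, err)
}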
	I1007 10:49:23.703714   23621 api_server.go:52] waiting for apiserver process to appear ...
	I1007 10:49:23.703771   23621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 10:49:23.720988   23621 api_server.go:72] duration metric: took 22.980139715s to wait for apiserver process to appear ...
	I1007 10:49:23.721017   23621 api_server.go:88] waiting for apiserver healthz status ...
	I1007 10:49:23.721038   23621 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I1007 10:49:23.727765   23621 api_server.go:279] https://192.168.39.250:8443/healthz returned 200:
	ok
	I1007 10:49:23.727841   23621 round_trippers.go:463] GET https://192.168.39.250:8443/version
	I1007 10:49:23.727846   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:23.727855   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:23.727860   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:23.728928   23621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1007 10:49:23.729002   23621 api_server.go:141] control plane version: v1.31.1
	I1007 10:49:23.729019   23621 api_server.go:131] duration metric: took 7.995236ms to wait for apiserver health ...
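Before trusting the control plane, the log above issues a plain GET against /healthz and expects a 200 with the literal body "ok". A hedged sketch of that probe is below; the endpoint address is taken from this log, and skipping TLS verification is an assumption made only to keep the example self-contained (the real client authenticates with the cluster CA and client certificates, and an unauthenticated probe may be rejected depending on RBAC).

// Hedged sketch of an apiserver healthz probe like the one logged above.
// InsecureSkipVerify and anonymous access are illustration-only shortcuts.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.39.250:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // the run above saw: 200 ok
}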
	I1007 10:49:23.729029   23621 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 10:49:23.899405   23621 request.go:632] Waited for 170.304588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:49:23.899474   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:49:23.899479   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:23.899494   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:23.899501   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:23.905647   23621 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 10:49:23.912018   23621 system_pods.go:59] 24 kube-system pods found
	I1007 10:49:23.912046   23621 system_pods.go:61] "coredns-7c65d6cfc9-ghmwd" [8d8533b9-192b-49a8-8d17-96ffd98cb729] Running
	I1007 10:49:23.912051   23621 system_pods.go:61] "coredns-7c65d6cfc9-xzc88" [f22736c0-5ca4-4c9b-bcd4-cf95f9390507] Running
	I1007 10:49:23.912055   23621 system_pods.go:61] "etcd-ha-406505" [06acd5be-60d4-4e5d-878a-eb237eccef90] Running
	I1007 10:49:23.912059   23621 system_pods.go:61] "etcd-ha-406505-m02" [6bac8986-60bc-4067-8d14-39e7a7d89de4] Running
	I1007 10:49:23.912064   23621 system_pods.go:61] "etcd-ha-406505-m03" [2c0079fb-51f1-423c-8b4c-893824342cd6] Running
	I1007 10:49:23.912069   23621 system_pods.go:61] "kindnet-28vpp" [c14e8bdf-ebc5-4349-adb4-6786cd15551d] Running
	I1007 10:49:23.912074   23621 system_pods.go:61] "kindnet-h8fh4" [4963cef8-d0f0-47a7-a9f3-4ec6cc1cbdd2] Running
	I1007 10:49:23.912079   23621 system_pods.go:61] "kindnet-pt74h" [bb72605c-a772-4b04-a14d-02efe957c9d0] Running
	I1007 10:49:23.912087   23621 system_pods.go:61] "kube-apiserver-ha-406505" [86ec3125-9faf-431c-829c-74bedca10848] Running
	I1007 10:49:23.912092   23621 system_pods.go:61] "kube-apiserver-ha-406505-m02" [9b1f1980-971a-4059-90f3-75aa418811e9] Running
	I1007 10:49:23.912101   23621 system_pods.go:61] "kube-apiserver-ha-406505-m03" [8bc80684-cd9a-40b1-94e1-02cb77917c36] Running
	I1007 10:49:23.912106   23621 system_pods.go:61] "kube-controller-manager-ha-406505" [9f228931-e5fe-4983-aed5-71ec54a10242] Running
	I1007 10:49:23.912111   23621 system_pods.go:61] "kube-controller-manager-ha-406505-m02" [87c1a5f3-2c61-40b6-8ac6-8641fe480883] Running
	I1007 10:49:23.912116   23621 system_pods.go:61] "kube-controller-manager-ha-406505-m03" [ab97ec1a-fb7e-42a5-b77c-721ccf85db1d] Running
	I1007 10:49:23.912120   23621 system_pods.go:61] "kube-proxy-6ng4z" [0bbf71c3-f4c6-44e2-a86f-2528957fd17e] Running
	I1007 10:49:23.912123   23621 system_pods.go:61] "kube-proxy-c79zf" [2b12aaa5-9560-459b-a3bb-e45e73a6b663] Running
	I1007 10:49:23.912129   23621 system_pods.go:61] "kube-proxy-nlnhf" [053080d5-38da-4108-96aa-f4a8dbe5de91] Running
	I1007 10:49:23.912132   23621 system_pods.go:61] "kube-scheduler-ha-406505" [40a9a2f6-4f4e-48eb-8ef2-958b87cea171] Running
	I1007 10:49:23.912135   23621 system_pods.go:61] "kube-scheduler-ha-406505-m02" [b1def4a5-2143-46b6-ae4a-44b7997ec7b2] Running
	I1007 10:49:23.912139   23621 system_pods.go:61] "kube-scheduler-ha-406505-m03" [da8d486f-250a-4961-ac7c-b1435c52a3ca] Running
	I1007 10:49:23.912147   23621 system_pods.go:61] "kube-vip-ha-406505" [e31ca80b-44ca-4b0a-8d38-ba06f81592f8] Running
	I1007 10:49:23.912152   23621 system_pods.go:61] "kube-vip-ha-406505-m02" [982dab2a-18d2-409a-9d27-51f746597898] Running
	I1007 10:49:23.912155   23621 system_pods.go:61] "kube-vip-ha-406505-m03" [a90a6084-73a3-476c-9729-1d8b45c6f3fc] Running
	I1007 10:49:23.912160   23621 system_pods.go:61] "storage-provisioner" [be10b32c-e562-40ef-8b47-04cd1caf9778] Running
	I1007 10:49:23.912167   23621 system_pods.go:74] duration metric: took 183.129229ms to wait for pod list to return data ...
	I1007 10:49:23.912178   23621 default_sa.go:34] waiting for default service account to be created ...
	I1007 10:49:24.099457   23621 request.go:632] Waited for 187.192356ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/default/serviceaccounts
	I1007 10:49:24.099519   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/default/serviceaccounts
	I1007 10:49:24.099524   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:24.099532   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:24.099538   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:24.104028   23621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 10:49:24.104180   23621 default_sa.go:45] found service account: "default"
	I1007 10:49:24.104202   23621 default_sa.go:55] duration metric: took 192.014074ms for default service account to be created ...
	I1007 10:49:24.104214   23621 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 10:49:24.299461   23621 request.go:632] Waited for 195.156179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:49:24.299513   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I1007 10:49:24.299518   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:24.299525   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:24.299530   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:24.305308   23621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 10:49:24.311531   23621 system_pods.go:86] 24 kube-system pods found
	I1007 10:49:24.311559   23621 system_pods.go:89] "coredns-7c65d6cfc9-ghmwd" [8d8533b9-192b-49a8-8d17-96ffd98cb729] Running
	I1007 10:49:24.311565   23621 system_pods.go:89] "coredns-7c65d6cfc9-xzc88" [f22736c0-5ca4-4c9b-bcd4-cf95f9390507] Running
	I1007 10:49:24.311569   23621 system_pods.go:89] "etcd-ha-406505" [06acd5be-60d4-4e5d-878a-eb237eccef90] Running
	I1007 10:49:24.311575   23621 system_pods.go:89] "etcd-ha-406505-m02" [6bac8986-60bc-4067-8d14-39e7a7d89de4] Running
	I1007 10:49:24.311579   23621 system_pods.go:89] "etcd-ha-406505-m03" [2c0079fb-51f1-423c-8b4c-893824342cd6] Running
	I1007 10:49:24.311583   23621 system_pods.go:89] "kindnet-28vpp" [c14e8bdf-ebc5-4349-adb4-6786cd15551d] Running
	I1007 10:49:24.311589   23621 system_pods.go:89] "kindnet-h8fh4" [4963cef8-d0f0-47a7-a9f3-4ec6cc1cbdd2] Running
	I1007 10:49:24.311593   23621 system_pods.go:89] "kindnet-pt74h" [bb72605c-a772-4b04-a14d-02efe957c9d0] Running
	I1007 10:49:24.311599   23621 system_pods.go:89] "kube-apiserver-ha-406505" [86ec3125-9faf-431c-829c-74bedca10848] Running
	I1007 10:49:24.311602   23621 system_pods.go:89] "kube-apiserver-ha-406505-m02" [9b1f1980-971a-4059-90f3-75aa418811e9] Running
	I1007 10:49:24.311606   23621 system_pods.go:89] "kube-apiserver-ha-406505-m03" [8bc80684-cd9a-40b1-94e1-02cb77917c36] Running
	I1007 10:49:24.311611   23621 system_pods.go:89] "kube-controller-manager-ha-406505" [9f228931-e5fe-4983-aed5-71ec54a10242] Running
	I1007 10:49:24.311617   23621 system_pods.go:89] "kube-controller-manager-ha-406505-m02" [87c1a5f3-2c61-40b6-8ac6-8641fe480883] Running
	I1007 10:49:24.311620   23621 system_pods.go:89] "kube-controller-manager-ha-406505-m03" [ab97ec1a-fb7e-42a5-b77c-721ccf85db1d] Running
	I1007 10:49:24.311626   23621 system_pods.go:89] "kube-proxy-6ng4z" [0bbf71c3-f4c6-44e2-a86f-2528957fd17e] Running
	I1007 10:49:24.311629   23621 system_pods.go:89] "kube-proxy-c79zf" [2b12aaa5-9560-459b-a3bb-e45e73a6b663] Running
	I1007 10:49:24.311635   23621 system_pods.go:89] "kube-proxy-nlnhf" [053080d5-38da-4108-96aa-f4a8dbe5de91] Running
	I1007 10:49:24.311638   23621 system_pods.go:89] "kube-scheduler-ha-406505" [40a9a2f6-4f4e-48eb-8ef2-958b87cea171] Running
	I1007 10:49:24.311643   23621 system_pods.go:89] "kube-scheduler-ha-406505-m02" [b1def4a5-2143-46b6-ae4a-44b7997ec7b2] Running
	I1007 10:49:24.311646   23621 system_pods.go:89] "kube-scheduler-ha-406505-m03" [da8d486f-250a-4961-ac7c-b1435c52a3ca] Running
	I1007 10:49:24.311649   23621 system_pods.go:89] "kube-vip-ha-406505" [e31ca80b-44ca-4b0a-8d38-ba06f81592f8] Running
	I1007 10:49:24.311652   23621 system_pods.go:89] "kube-vip-ha-406505-m02" [982dab2a-18d2-409a-9d27-51f746597898] Running
	I1007 10:49:24.311655   23621 system_pods.go:89] "kube-vip-ha-406505-m03" [a90a6084-73a3-476c-9729-1d8b45c6f3fc] Running
	I1007 10:49:24.311658   23621 system_pods.go:89] "storage-provisioner" [be10b32c-e562-40ef-8b47-04cd1caf9778] Running
	I1007 10:49:24.311664   23621 system_pods.go:126] duration metric: took 207.442478ms to wait for k8s-apps to be running ...
	I1007 10:49:24.311673   23621 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 10:49:24.311718   23621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 10:49:24.329372   23621 system_svc.go:56] duration metric: took 17.689597ms WaitForService to wait for kubelet
	I1007 10:49:24.329408   23621 kubeadm.go:582] duration metric: took 23.588563567s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 10:49:24.329431   23621 node_conditions.go:102] verifying NodePressure condition ...
	I1007 10:49:24.498716   23621 request.go:632] Waited for 169.197079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes
	I1007 10:49:24.498772   23621 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes
	I1007 10:49:24.498777   23621 round_trippers.go:469] Request Headers:
	I1007 10:49:24.498785   23621 round_trippers.go:473]     Accept: application/json, */*
	I1007 10:49:24.498788   23621 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 10:49:24.502487   23621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 10:49:24.503651   23621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 10:49:24.503669   23621 node_conditions.go:123] node cpu capacity is 2
	I1007 10:49:24.503680   23621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 10:49:24.503684   23621 node_conditions.go:123] node cpu capacity is 2
	I1007 10:49:24.503688   23621 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 10:49:24.503691   23621 node_conditions.go:123] node cpu capacity is 2
	I1007 10:49:24.503697   23621 node_conditions.go:105] duration metric: took 174.259877ms to run NodePressure ...
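The NodePressure step reads each node's reported capacity, which is where the three "ephemeral capacity is 17734596Ki" / "cpu capacity is 2" pairs above come from. A short, hedged sketch of reading those fields with client-go (kubeconfig path assumed):

// Hedged sketch: list nodes and print the capacity fields echoed by the
// node_conditions.go lines above. Assumes the default kubeconfig.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	nodes, err := kubernetes.NewForConfigOrDie(cfg).CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}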
	I1007 10:49:24.503713   23621 start.go:241] waiting for startup goroutines ...
	I1007 10:49:24.503733   23621 start.go:255] writing updated cluster config ...
	I1007 10:49:24.504082   23621 ssh_runner.go:195] Run: rm -f paused
	I1007 10:49:24.554954   23621 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 10:49:24.557268   23621 out.go:177] * Done! kubectl is now configured to use "ha-406505" cluster and "default" namespace by default
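A note on the recurring "Waited for ~195ms due to client-side throttling, not priority and fairness" lines throughout the run above: they come from client-go's client-side rate limiter, not from the API server. With the library's defaults (roughly 5 requests per second with a small burst), back-to-back GETs end up spaced about 200ms apart, which matches the waits in this log. A hedged sketch of where that limiter is configured follows; the QPS/Burst values shown are arbitrary examples, not what minikube actually sets.

// Hedged sketch: the client-side limiter behind the "Waited ... due to
// client-side throttling" messages lives on rest.Config. Raising QPS/Burst
// (example values) widens it; it does not affect server-side priority and fairness.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // client-go defaults to about 5 when left unset
	cfg.Burst = 100 // client-go defaults to about 10 when left unset
	_ = kubernetes.NewForConfigOrDie(cfg)
}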
	
	
	==> CRI-O <==
	Oct 07 10:53:27 ha-406505 crio[660]: time="2024-10-07 10:53:27.269392146Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298407269367886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1517a267-b08f-4078-8e20-24321a43ec5b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:27 ha-406505 crio[660]: time="2024-10-07 10:53:27.270390395Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d59894a1-3837-41be-98b3-c3b42c132801 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:27 ha-406505 crio[660]: time="2024-10-07 10:53:27.270506963Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d59894a1-3837-41be-98b3-c3b42c132801 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:27 ha-406505 crio[660]: time="2024-10-07 10:53:27.270767212Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d9a2a1043aa257a5b49746cc308a94d265694817a7f0fbbd68ad171298991f1,PodSandboxId:77c3242ae96e046410e9f20005f1fbb24da588dfaee8e50c326db63c8937c8e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728298170563505966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tzgjx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b76f90b1-386b-4eda-966f-2400d6bf4412,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cd2f018baffde621ecccf90e2bebf1bd5127de1fc5363ec02ef8a77dfe2fb6,PodSandboxId:ce1fc89e90c8e387dff576649434b5b4695d8ebba82ba26bc3f4cefcfc33c65a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728298019505152058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be10b32c-e562-40ef-8b47-04cd1caf9778,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0cc4a36e486c6a488e846bfbf03e43b7da70bb2bf99a153487b87f34d565f12,PodSandboxId:32fee1b9f25d39415e7cd76b67ef4415ac18c57e3916abbb8b84f537d05d70bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019502169374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xzc88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22736c0-5ca4-4c9b-bcd4-cf95f9390507,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ebc4ee6afc90a7e8d867a5ba3221808427590e259e04a5ba21cc1196931b136,PodSandboxId:6142c38866566220880ffbabeb4647565b882222de06d8ef0ebb773d8133ce82,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019443860618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d8533b9-19
2b-49a8-8d17-96ffd98cb729,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4abb8ea9312274b1d39534e4b29dcfa3f2435a34e478f7748745c76ad1380dec,PodSandboxId:33e535c0eb67f0d998329496b1eb04fc05224699267732416929dd8d574448c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17282980
07472938380,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pt74h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb72605c-a772-4b04-a14d-02efe957c9d0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b7425285dcb9164d5c7fff76a667317585d30360dbfa7acfcbbb4564f111ff,PodSandboxId:f6d2bf974f6664bc2f90e25c4c24158ddf2d928418ed69b8269deb018a7c47d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728298007298600158,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlnhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053080d5-38da-4108-96aa-f4a8dbe5de91,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79eb2653667b5edb3b74e57f09824040325c3d1478d135cb84dc2fc3ad5cffdf,PodSandboxId:faf0d86acd1e3016c8943a6859ab004358a6ebe57d3124f3a3f7e2cf653dcbce,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728297999352793772,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bdcf35327874f36021578ca054760a4,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4965d1b169f33dee456a383fa75c0f93cca6bf8017be5bf1f0787d83919887,PodSandboxId:77c273367dc317c1dbfced48163648058a222251f6c9e1c99ceebb3d8512736d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728297996346881143,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10aaa3e84694103c024dc95a3ae5c57f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b63558545dbd5bc6d949b55a25a4e873994215448c6c82acc474c3b3804be46,PodSandboxId:de56de352fe21ebf99f251426625b2e2196c9791eacf4a3a0f0cc212470d6959,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728297996306701860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e0002ddfebe157cb7f0f09bdb94c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11a16a81bf6bf12ab8aa881ebbb9fc65f881114f3caf29ac204033459e11987b,PodSandboxId:b351c9fd7630d9f6bd8758086f9311cf9cf764c9d014e815df7e65a88ac09104,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728297996266468474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406505,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 572e44bb4eeb4579e4fb7c299dd7cd5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0b61d1fd92070463676e33d6b2f91e643d202d53b9af7712c8459581d39750,PodSandboxId:c4fb1e79d237901ac30099998c455123e96284ff720650ca3e08edf1d232d547,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728297996234033589,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406505,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01277ab648416b0c5ac093cf7ea4b7be,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d59894a1-3837-41be-98b3-c3b42c132801 name=/runtime.v1.RuntimeService/ListContainers
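The ListContainers request/response pair above is the CRI RuntimeService API that clients such as crictl and the kubelet use against CRI-O. A hedged sketch of issuing the same call over the runtime socket follows; the socket path and the exact cri-api/grpc module versions are assumptions for illustration only.

// Hedged sketch: call the CRI RuntimeService ListContainers RPC on the CRI-O
// socket, mirroring the exchange logged above. Not the test's own code.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s %s %s\n", c.Id, c.Metadata.Name, c.State)
	}
}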
	Oct 07 10:53:27 ha-406505 crio[660]: time="2024-10-07 10:53:27.309337367Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d7024d4-cd7c-4727-848b-c01f76582f2e name=/runtime.v1.RuntimeService/Version
	Oct 07 10:53:27 ha-406505 crio[660]: time="2024-10-07 10:53:27.309470127Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d7024d4-cd7c-4727-848b-c01f76582f2e name=/runtime.v1.RuntimeService/Version
	Oct 07 10:53:27 ha-406505 crio[660]: time="2024-10-07 10:53:27.310593097Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c8d2420f-776c-4a98-93a9-1c661eb6833c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:27 ha-406505 crio[660]: time="2024-10-07 10:53:27.311596690Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298407311566439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8d2420f-776c-4a98-93a9-1c661eb6833c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:27 ha-406505 crio[660]: time="2024-10-07 10:53:27.313296846Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26d65b6f-2eba-4fc0-b9af-30f05d7afd51 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:27 ha-406505 crio[660]: time="2024-10-07 10:53:27.313356524Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26d65b6f-2eba-4fc0-b9af-30f05d7afd51 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:27 ha-406505 crio[660]: time="2024-10-07 10:53:27.313701923Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d9a2a1043aa257a5b49746cc308a94d265694817a7f0fbbd68ad171298991f1,PodSandboxId:77c3242ae96e046410e9f20005f1fbb24da588dfaee8e50c326db63c8937c8e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728298170563505966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tzgjx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b76f90b1-386b-4eda-966f-2400d6bf4412,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cd2f018baffde621ecccf90e2bebf1bd5127de1fc5363ec02ef8a77dfe2fb6,PodSandboxId:ce1fc89e90c8e387dff576649434b5b4695d8ebba82ba26bc3f4cefcfc33c65a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728298019505152058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be10b32c-e562-40ef-8b47-04cd1caf9778,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0cc4a36e486c6a488e846bfbf03e43b7da70bb2bf99a153487b87f34d565f12,PodSandboxId:32fee1b9f25d39415e7cd76b67ef4415ac18c57e3916abbb8b84f537d05d70bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019502169374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xzc88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22736c0-5ca4-4c9b-bcd4-cf95f9390507,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ebc4ee6afc90a7e8d867a5ba3221808427590e259e04a5ba21cc1196931b136,PodSandboxId:6142c38866566220880ffbabeb4647565b882222de06d8ef0ebb773d8133ce82,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019443860618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d8533b9-19
2b-49a8-8d17-96ffd98cb729,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4abb8ea9312274b1d39534e4b29dcfa3f2435a34e478f7748745c76ad1380dec,PodSandboxId:33e535c0eb67f0d998329496b1eb04fc05224699267732416929dd8d574448c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17282980
07472938380,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pt74h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb72605c-a772-4b04-a14d-02efe957c9d0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b7425285dcb9164d5c7fff76a667317585d30360dbfa7acfcbbb4564f111ff,PodSandboxId:f6d2bf974f6664bc2f90e25c4c24158ddf2d928418ed69b8269deb018a7c47d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728298007298600158,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlnhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053080d5-38da-4108-96aa-f4a8dbe5de91,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79eb2653667b5edb3b74e57f09824040325c3d1478d135cb84dc2fc3ad5cffdf,PodSandboxId:faf0d86acd1e3016c8943a6859ab004358a6ebe57d3124f3a3f7e2cf653dcbce,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728297999352793772,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bdcf35327874f36021578ca054760a4,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4965d1b169f33dee456a383fa75c0f93cca6bf8017be5bf1f0787d83919887,PodSandboxId:77c273367dc317c1dbfced48163648058a222251f6c9e1c99ceebb3d8512736d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728297996346881143,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10aaa3e84694103c024dc95a3ae5c57f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b63558545dbd5bc6d949b55a25a4e873994215448c6c82acc474c3b3804be46,PodSandboxId:de56de352fe21ebf99f251426625b2e2196c9791eacf4a3a0f0cc212470d6959,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728297996306701860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e0002ddfebe157cb7f0f09bdb94c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11a16a81bf6bf12ab8aa881ebbb9fc65f881114f3caf29ac204033459e11987b,PodSandboxId:b351c9fd7630d9f6bd8758086f9311cf9cf764c9d014e815df7e65a88ac09104,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728297996266468474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406505,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 572e44bb4eeb4579e4fb7c299dd7cd5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0b61d1fd92070463676e33d6b2f91e643d202d53b9af7712c8459581d39750,PodSandboxId:c4fb1e79d237901ac30099998c455123e96284ff720650ca3e08edf1d232d547,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728297996234033589,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406505,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01277ab648416b0c5ac093cf7ea4b7be,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26d65b6f-2eba-4fc0-b9af-30f05d7afd51 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:27 ha-406505 crio[660]: time="2024-10-07 10:53:27.355016981Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=055244ab-8020-4c68-83df-6901ce6ade49 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:53:27 ha-406505 crio[660]: time="2024-10-07 10:53:27.355104237Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=055244ab-8020-4c68-83df-6901ce6ade49 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:53:27 ha-406505 crio[660]: time="2024-10-07 10:53:27.356349654Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bc38ca2e-7b46-4e84-99a2-92ba06d91219 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:27 ha-406505 crio[660]: time="2024-10-07 10:53:27.357051655Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298407357025869,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc38ca2e-7b46-4e84-99a2-92ba06d91219 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:27 ha-406505 crio[660]: time="2024-10-07 10:53:27.357800376Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d37115a5-9909-4473-aac6-da708de7f1c9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:27 ha-406505 crio[660]: time="2024-10-07 10:53:27.357855887Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d37115a5-9909-4473-aac6-da708de7f1c9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:27 ha-406505 crio[660]: time="2024-10-07 10:53:27.358096658Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d9a2a1043aa257a5b49746cc308a94d265694817a7f0fbbd68ad171298991f1,PodSandboxId:77c3242ae96e046410e9f20005f1fbb24da588dfaee8e50c326db63c8937c8e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728298170563505966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tzgjx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b76f90b1-386b-4eda-966f-2400d6bf4412,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cd2f018baffde621ecccf90e2bebf1bd5127de1fc5363ec02ef8a77dfe2fb6,PodSandboxId:ce1fc89e90c8e387dff576649434b5b4695d8ebba82ba26bc3f4cefcfc33c65a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728298019505152058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be10b32c-e562-40ef-8b47-04cd1caf9778,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0cc4a36e486c6a488e846bfbf03e43b7da70bb2bf99a153487b87f34d565f12,PodSandboxId:32fee1b9f25d39415e7cd76b67ef4415ac18c57e3916abbb8b84f537d05d70bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019502169374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xzc88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22736c0-5ca4-4c9b-bcd4-cf95f9390507,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ebc4ee6afc90a7e8d867a5ba3221808427590e259e04a5ba21cc1196931b136,PodSandboxId:6142c38866566220880ffbabeb4647565b882222de06d8ef0ebb773d8133ce82,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019443860618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d8533b9-19
2b-49a8-8d17-96ffd98cb729,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4abb8ea9312274b1d39534e4b29dcfa3f2435a34e478f7748745c76ad1380dec,PodSandboxId:33e535c0eb67f0d998329496b1eb04fc05224699267732416929dd8d574448c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17282980
07472938380,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pt74h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb72605c-a772-4b04-a14d-02efe957c9d0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b7425285dcb9164d5c7fff76a667317585d30360dbfa7acfcbbb4564f111ff,PodSandboxId:f6d2bf974f6664bc2f90e25c4c24158ddf2d928418ed69b8269deb018a7c47d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728298007298600158,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlnhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053080d5-38da-4108-96aa-f4a8dbe5de91,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79eb2653667b5edb3b74e57f09824040325c3d1478d135cb84dc2fc3ad5cffdf,PodSandboxId:faf0d86acd1e3016c8943a6859ab004358a6ebe57d3124f3a3f7e2cf653dcbce,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728297999352793772,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bdcf35327874f36021578ca054760a4,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4965d1b169f33dee456a383fa75c0f93cca6bf8017be5bf1f0787d83919887,PodSandboxId:77c273367dc317c1dbfced48163648058a222251f6c9e1c99ceebb3d8512736d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728297996346881143,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10aaa3e84694103c024dc95a3ae5c57f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b63558545dbd5bc6d949b55a25a4e873994215448c6c82acc474c3b3804be46,PodSandboxId:de56de352fe21ebf99f251426625b2e2196c9791eacf4a3a0f0cc212470d6959,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728297996306701860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e0002ddfebe157cb7f0f09bdb94c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11a16a81bf6bf12ab8aa881ebbb9fc65f881114f3caf29ac204033459e11987b,PodSandboxId:b351c9fd7630d9f6bd8758086f9311cf9cf764c9d014e815df7e65a88ac09104,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728297996266468474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406505,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 572e44bb4eeb4579e4fb7c299dd7cd5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0b61d1fd92070463676e33d6b2f91e643d202d53b9af7712c8459581d39750,PodSandboxId:c4fb1e79d237901ac30099998c455123e96284ff720650ca3e08edf1d232d547,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728297996234033589,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406505,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01277ab648416b0c5ac093cf7ea4b7be,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d37115a5-9909-4473-aac6-da708de7f1c9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:27 ha-406505 crio[660]: time="2024-10-07 10:53:27.396864936Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4cd5db8-6dfd-45b4-9f9c-78dd5a5974a6 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:53:27 ha-406505 crio[660]: time="2024-10-07 10:53:27.396968558Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4cd5db8-6dfd-45b4-9f9c-78dd5a5974a6 name=/runtime.v1.RuntimeService/Version
	Oct 07 10:53:27 ha-406505 crio[660]: time="2024-10-07 10:53:27.398297609Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=40ce4cbe-5781-4571-8baf-85aa5b80f749 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:27 ha-406505 crio[660]: time="2024-10-07 10:53:27.398900675Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298407398865876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=40ce4cbe-5781-4571-8baf-85aa5b80f749 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 10:53:27 ha-406505 crio[660]: time="2024-10-07 10:53:27.399565067Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51652d5a-8f77-42cf-8e4b-3a9739633ec5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:27 ha-406505 crio[660]: time="2024-10-07 10:53:27.399643714Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51652d5a-8f77-42cf-8e4b-3a9739633ec5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 10:53:27 ha-406505 crio[660]: time="2024-10-07 10:53:27.399863316Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d9a2a1043aa257a5b49746cc308a94d265694817a7f0fbbd68ad171298991f1,PodSandboxId:77c3242ae96e046410e9f20005f1fbb24da588dfaee8e50c326db63c8937c8e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728298170563505966,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tzgjx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b76f90b1-386b-4eda-966f-2400d6bf4412,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cd2f018baffde621ecccf90e2bebf1bd5127de1fc5363ec02ef8a77dfe2fb6,PodSandboxId:ce1fc89e90c8e387dff576649434b5b4695d8ebba82ba26bc3f4cefcfc33c65a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728298019505152058,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be10b32c-e562-40ef-8b47-04cd1caf9778,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0cc4a36e486c6a488e846bfbf03e43b7da70bb2bf99a153487b87f34d565f12,PodSandboxId:32fee1b9f25d39415e7cd76b67ef4415ac18c57e3916abbb8b84f537d05d70bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019502169374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xzc88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22736c0-5ca4-4c9b-bcd4-cf95f9390507,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ebc4ee6afc90a7e8d867a5ba3221808427590e259e04a5ba21cc1196931b136,PodSandboxId:6142c38866566220880ffbabeb4647565b882222de06d8ef0ebb773d8133ce82,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728298019443860618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ghmwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d8533b9-19
2b-49a8-8d17-96ffd98cb729,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4abb8ea9312274b1d39534e4b29dcfa3f2435a34e478f7748745c76ad1380dec,PodSandboxId:33e535c0eb67f0d998329496b1eb04fc05224699267732416929dd8d574448c5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17282980
07472938380,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pt74h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb72605c-a772-4b04-a14d-02efe957c9d0,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b7425285dcb9164d5c7fff76a667317585d30360dbfa7acfcbbb4564f111ff,PodSandboxId:f6d2bf974f6664bc2f90e25c4c24158ddf2d928418ed69b8269deb018a7c47d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728298007298600158,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nlnhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053080d5-38da-4108-96aa-f4a8dbe5de91,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79eb2653667b5edb3b74e57f09824040325c3d1478d135cb84dc2fc3ad5cffdf,PodSandboxId:faf0d86acd1e3016c8943a6859ab004358a6ebe57d3124f3a3f7e2cf653dcbce,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728297999352793772,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bdcf35327874f36021578ca054760a4,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4965d1b169f33dee456a383fa75c0f93cca6bf8017be5bf1f0787d83919887,PodSandboxId:77c273367dc317c1dbfced48163648058a222251f6c9e1c99ceebb3d8512736d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728297996346881143,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10aaa3e84694103c024dc95a3ae5c57f,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b63558545dbd5bc6d949b55a25a4e873994215448c6c82acc474c3b3804be46,PodSandboxId:de56de352fe21ebf99f251426625b2e2196c9791eacf4a3a0f0cc212470d6959,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728297996306701860,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-ha-406505,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e0002ddfebe157cb7f0f09bdb94c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11a16a81bf6bf12ab8aa881ebbb9fc65f881114f3caf29ac204033459e11987b,PodSandboxId:b351c9fd7630d9f6bd8758086f9311cf9cf764c9d014e815df7e65a88ac09104,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728297996266468474,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-406505,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 572e44bb4eeb4579e4fb7c299dd7cd5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0b61d1fd92070463676e33d6b2f91e643d202d53b9af7712c8459581d39750,PodSandboxId:c4fb1e79d237901ac30099998c455123e96284ff720650ca3e08edf1d232d547,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728297996234033589,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-406505,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01277ab648416b0c5ac093cf7ea4b7be,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=51652d5a-8f77-42cf-8e4b-3a9739633ec5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4d9a2a1043aa2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   77c3242ae96e0       busybox-7dff88458-tzgjx
	77cd2f018baff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   ce1fc89e90c8e       storage-provisioner
	b0cc4a36e486c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   32fee1b9f25d3       coredns-7c65d6cfc9-xzc88
	0ebc4ee6afc90       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   6142c38866566       coredns-7c65d6cfc9-ghmwd
	4abb8ea931227       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   33e535c0eb67f       kindnet-pt74h
	99b7425285dcb       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   f6d2bf974f666       kube-proxy-nlnhf
	79eb2653667b5       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   faf0d86acd1e3       kube-vip-ha-406505
	fa4965d1b169f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   77c273367dc31       kube-scheduler-ha-406505
	5b63558545dbd       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   de56de352fe21       kube-apiserver-ha-406505
	11a16a81bf6bf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   b351c9fd7630d       etcd-ha-406505
	eb0b61d1fd920       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   c4fb1e79d2379       kube-controller-manager-ha-406505
	
	
	==> coredns [0ebc4ee6afc90a7e8d867a5ba3221808427590e259e04a5ba21cc1196931b136] <==
	[INFO] 10.244.1.2:52141 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000229841s
	[INFO] 10.244.1.2:49387 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177541s
	[INFO] 10.244.1.2:51777 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003610459s
	[INFO] 10.244.1.2:53883 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000188749s
	[INFO] 10.244.2.2:56490 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126634s
	[INFO] 10.244.2.2:39507 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008519s
	[INFO] 10.244.2.2:51465 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085975s
	[INFO] 10.244.2.2:54662 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000141674s
	[INFO] 10.244.0.4:60148 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114521s
	[INFO] 10.244.0.4:60136 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000061595s
	[INFO] 10.244.0.4:58172 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000046455s
	[INFO] 10.244.0.4:37188 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001182047s
	[INFO] 10.244.0.4:43590 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000115472s
	[INFO] 10.244.0.4:58012 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000033373s
	[INFO] 10.244.1.2:49885 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158136s
	[INFO] 10.244.1.2:37058 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108137s
	[INFO] 10.244.1.2:53254 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014209s
	[INFO] 10.244.2.2:48605 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000226971s
	[INFO] 10.244.0.4:56354 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139347s
	[INFO] 10.244.0.4:53408 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091527s
	[INFO] 10.244.1.2:56944 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148755s
	[INFO] 10.244.1.2:35017 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000240968s
	[INFO] 10.244.1.2:60956 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000156011s
	[INFO] 10.244.2.2:52452 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000151278s
	[INFO] 10.244.0.4:37523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000081767s
	
	
	==> coredns [b0cc4a36e486c6a488e846bfbf03e43b7da70bb2bf99a153487b87f34d565f12] <==
	[INFO] 10.244.2.2:48222 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000340345s
	[INFO] 10.244.2.2:43370 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001307969s
	[INFO] 10.244.0.4:43661 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000100802s
	[INFO] 10.244.0.4:58476 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001778301s
	[INFO] 10.244.1.2:33672 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000201181s
	[INFO] 10.244.1.2:45107 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000305371s
	[INFO] 10.244.2.2:49200 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000294988s
	[INFO] 10.244.2.2:49393 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001850366s
	[INFO] 10.244.2.2:48213 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001471137s
	[INFO] 10.244.2.2:60468 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152254s
	[INFO] 10.244.0.4:59551 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001687745s
	[INFO] 10.244.0.4:49859 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000044844s
	[INFO] 10.244.1.2:53294 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000358207s
	[INFO] 10.244.2.2:48456 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119873s
	[INFO] 10.244.2.2:52623 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000223935s
	[INFO] 10.244.2.2:35737 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000161301s
	[INFO] 10.244.0.4:48948 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099818s
	[INFO] 10.244.0.4:38842 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000194312s
	[INFO] 10.244.1.2:52889 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000213247s
	[INFO] 10.244.2.2:54256 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000280783s
	[INFO] 10.244.2.2:50232 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000318899s
	[INFO] 10.244.2.2:39214 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000147924s
	[INFO] 10.244.0.4:53521 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112358s
	[INFO] 10.244.0.4:49217 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000161935s
	[INFO] 10.244.0.4:32867 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109582s
	
	
	==> describe nodes <==
	Name:               ha-406505
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406505
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f
	                    minikube.k8s.io/name=ha-406505
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T10_46_43_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 10:46:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406505
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 10:53:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 10:49:45 +0000   Mon, 07 Oct 2024 10:46:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 10:49:45 +0000   Mon, 07 Oct 2024 10:46:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 10:49:45 +0000   Mon, 07 Oct 2024 10:46:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 10:49:45 +0000   Mon, 07 Oct 2024 10:46:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    ha-406505
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f87dab03082f46978f270a1d9209ed7f
	  System UUID:                f87dab03-082f-4697-8f27-0a1d9209ed7f
	  Boot ID:                    c90db251-8dbe-47f3-98dd-72c0b5cbd489
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-tzgjx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 coredns-7c65d6cfc9-ghmwd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m40s
	  kube-system                 coredns-7c65d6cfc9-xzc88             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m40s
	  kube-system                 etcd-ha-406505                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m45s
	  kube-system                 kindnet-pt74h                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m41s
	  kube-system                 kube-apiserver-ha-406505             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m45s
	  kube-system                 kube-controller-manager-ha-406505    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 kube-proxy-nlnhf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 kube-scheduler-ha-406505             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m45s
	  kube-system                 kube-vip-ha-406505                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m39s  kube-proxy       
	  Normal  Starting                 6m45s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m45s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m45s  kubelet          Node ha-406505 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m45s  kubelet          Node ha-406505 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m45s  kubelet          Node ha-406505 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m41s  node-controller  Node ha-406505 event: Registered Node ha-406505 in Controller
	  Normal  NodeReady                6m29s  kubelet          Node ha-406505 status is now: NodeReady
	  Normal  RegisteredNode           5m41s  node-controller  Node ha-406505 event: Registered Node ha-406505 in Controller
	  Normal  RegisteredNode           4m22s  node-controller  Node ha-406505 event: Registered Node ha-406505 in Controller
	
	
	Name:               ha-406505-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406505-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f
	                    minikube.k8s.io/name=ha-406505
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T10_47_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 10:47:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406505-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 10:50:41 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 07 Oct 2024 10:49:40 +0000   Mon, 07 Oct 2024 10:51:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 07 Oct 2024 10:49:40 +0000   Mon, 07 Oct 2024 10:51:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 07 Oct 2024 10:49:40 +0000   Mon, 07 Oct 2024 10:51:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 07 Oct 2024 10:49:40 +0000   Mon, 07 Oct 2024 10:51:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.37
	  Hostname:    ha-406505-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad0b7870a2a54204abf112edd9c072ce
	  System UUID:                ad0b7870-a2a5-4204-abf1-12edd9c072ce
	  Boot ID:                    0b4627e5-d7a2-40a3-9d63-8cae53190740
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-bjz2q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 etcd-ha-406505-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m47s
	  kube-system                 kindnet-h8fh4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m49s
	  kube-system                 kube-apiserver-ha-406505-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 kube-controller-manager-ha-406505-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 kube-proxy-6ng4z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-scheduler-ha-406505-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 kube-vip-ha-406505-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m45s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m49s (x8 over 5m49s)  kubelet          Node ha-406505-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m49s (x8 over 5m49s)  kubelet          Node ha-406505-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m49s (x7 over 5m49s)  kubelet          Node ha-406505-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m46s                  node-controller  Node ha-406505-m02 event: Registered Node ha-406505-m02 in Controller
	  Normal  RegisteredNode           5m41s                  node-controller  Node ha-406505-m02 event: Registered Node ha-406505-m02 in Controller
	  Normal  RegisteredNode           4m22s                  node-controller  Node ha-406505-m02 event: Registered Node ha-406505-m02 in Controller
	  Normal  NodeNotReady             2m2s                   node-controller  Node ha-406505-m02 status is now: NodeNotReady
	
	
	Name:               ha-406505-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406505-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f
	                    minikube.k8s.io/name=ha-406505
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T10_49_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 10:48:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406505-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 10:53:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 10:49:57 +0000   Mon, 07 Oct 2024 10:48:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 10:49:57 +0000   Mon, 07 Oct 2024 10:48:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 10:49:57 +0000   Mon, 07 Oct 2024 10:48:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 10:49:57 +0000   Mon, 07 Oct 2024 10:49:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-406505-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 75575a7b8eb34e0589ff800419073c6f
	  System UUID:                75575a7b-8eb3-4e05-89ff-800419073c6f
	  Boot ID:                    797c7f20-765b-4e29-a483-d65c033a2625
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ktkg9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 etcd-ha-406505-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m29s
	  kube-system                 kindnet-28vpp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m31s
	  kube-system                 kube-apiserver-ha-406505-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-controller-manager-ha-406505-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-proxy-c79zf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-scheduler-ha-406505-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-vip-ha-406505-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m26s                  kube-proxy       
	  Normal  RegisteredNode           4m31s                  node-controller  Node ha-406505-m03 event: Registered Node ha-406505-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m31s (x8 over 4m31s)  kubelet          Node ha-406505-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m31s (x8 over 4m31s)  kubelet          Node ha-406505-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m31s (x7 over 4m31s)  kubelet          Node ha-406505-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m26s                  node-controller  Node ha-406505-m03 event: Registered Node ha-406505-m03 in Controller
	  Normal  RegisteredNode           4m22s                  node-controller  Node ha-406505-m03 event: Registered Node ha-406505-m03 in Controller
	
	
	Name:               ha-406505-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-406505-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f
	                    minikube.k8s.io/name=ha-406505
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T10_50_06_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 10:50:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-406505-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 10:53:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 10:50:36 +0000   Mon, 07 Oct 2024 10:50:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 10:50:36 +0000   Mon, 07 Oct 2024 10:50:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 10:50:36 +0000   Mon, 07 Oct 2024 10:50:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 10:50:36 +0000   Mon, 07 Oct 2024 10:50:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    ha-406505-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9eb4bdac85cb424a99b5076fbfc659b6
	  System UUID:                9eb4bdac-85cb-424a-99b5-076fbfc659b6
	  Boot ID:                    6e48a403-8d50-4a51-beab-d3d8e1e29c60
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-cqsll       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m22s
	  kube-system                 kube-proxy-8n5g6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m16s                  kube-proxy       
	  Normal  RegisteredNode           3m22s                  node-controller  Node ha-406505-m04 event: Registered Node ha-406505-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m22s (x2 over 3m22s)  kubelet          Node ha-406505-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m22s (x2 over 3m22s)  kubelet          Node ha-406505-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m22s (x2 over 3m22s)  kubelet          Node ha-406505-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m21s                  node-controller  Node ha-406505-m04 event: Registered Node ha-406505-m04 in Controller
	  Normal  RegisteredNode           3m21s                  node-controller  Node ha-406505-m04 event: Registered Node ha-406505-m04 in Controller
	  Normal  NodeReady                3m1s                   kubelet          Node ha-406505-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 7 10:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051371] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040405] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.858113] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.711350] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.602582] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.722628] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.057663] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056433] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.169114] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.137291] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.300660] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +4.116084] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +3.680655] systemd-fstab-generator[874]: Ignoring "noauto" option for root device
	[  +0.069150] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.087227] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[  +0.089104] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.196698] kauditd_printk_skb: 31 callbacks suppressed
	[ +11.900338] kauditd_printk_skb: 28 callbacks suppressed
	[Oct 7 10:47] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [11a16a81bf6bf12ab8aa881ebbb9fc65f881114f3caf29ac204033459e11987b] <==
	{"level":"warn","ts":"2024-10-07T10:53:27.678067Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:27.681993Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:27.693706Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:27.699785Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:27.716870Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:27.721793Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:27.726250Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:27.735704Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:27.741813Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:27.749877Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:27.751862Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:27.759149Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:27.762681Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:27.768789Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:27.775190Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:27.781174Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:27.784985Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:27.788170Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:27.791726Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:27.798295Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:27.804372Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:27.836810Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T10:53:27.850175Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.37:2380/version","remote-member-id":"6cdff3fd781adadc","error":"Get \"https://192.168.39.37:2380/version\": dial tcp 192.168.39.37:2380: i/o timeout"}
	{"level":"warn","ts":"2024-10-07T10:53:27.850243Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6cdff3fd781adadc","error":"Get \"https://192.168.39.37:2380/version\": dial tcp 192.168.39.37:2380: i/o timeout"}
	{"level":"warn","ts":"2024-10-07T10:53:27.852055Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a69e859ffe38fcde","from":"a69e859ffe38fcde","remote-peer-id":"6cdff3fd781adadc","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:53:27 up 7 min,  0 users,  load average: 0.78, 0.60, 0.28
	Linux ha-406505 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4abb8ea9312274b1d39534e4b29dcfa3f2435a34e478f7748745c76ad1380dec] <==
	I1007 10:52:48.825838       1 main.go:322] Node ha-406505-m03 has CIDR [10.244.2.0/24] 
	I1007 10:52:58.833626       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I1007 10:52:58.833675       1 main.go:299] handling current node
	I1007 10:52:58.833690       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I1007 10:52:58.833695       1 main.go:322] Node ha-406505-m02 has CIDR [10.244.1.0/24] 
	I1007 10:52:58.833864       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I1007 10:52:58.833902       1 main.go:322] Node ha-406505-m03 has CIDR [10.244.2.0/24] 
	I1007 10:52:58.833984       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I1007 10:52:58.834007       1 main.go:322] Node ha-406505-m04 has CIDR [10.244.3.0/24] 
	I1007 10:53:08.831971       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I1007 10:53:08.832046       1 main.go:322] Node ha-406505-m02 has CIDR [10.244.1.0/24] 
	I1007 10:53:08.832167       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I1007 10:53:08.832188       1 main.go:322] Node ha-406505-m03 has CIDR [10.244.2.0/24] 
	I1007 10:53:08.832260       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I1007 10:53:08.832280       1 main.go:322] Node ha-406505-m04 has CIDR [10.244.3.0/24] 
	I1007 10:53:08.832356       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I1007 10:53:08.832375       1 main.go:299] handling current node
	I1007 10:53:18.831206       1 main.go:295] Handling node with IPs: map[192.168.39.250:{}]
	I1007 10:53:18.831277       1 main.go:299] handling current node
	I1007 10:53:18.831346       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I1007 10:53:18.831353       1 main.go:322] Node ha-406505-m02 has CIDR [10.244.1.0/24] 
	I1007 10:53:18.831556       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I1007 10:53:18.831582       1 main.go:322] Node ha-406505-m03 has CIDR [10.244.2.0/24] 
	I1007 10:53:18.831637       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I1007 10:53:18.831656       1 main.go:322] Node ha-406505-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [5b63558545dbd5bc6d949b55a25a4e873994215448c6c82acc474c3b3804be46] <==
	W1007 10:46:41.183638       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.250]
	I1007 10:46:41.185270       1 controller.go:615] quota admission added evaluator for: endpoints
	I1007 10:46:41.191014       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1007 10:46:41.276253       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1007 10:46:42.491094       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1007 10:46:42.518362       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1007 10:46:42.533655       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1007 10:46:46.678876       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1007 10:46:46.902258       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1007 10:49:31.707971       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59314: use of closed network connection
	E1007 10:49:31.903823       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59340: use of closed network connection
	E1007 10:49:32.086294       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59358: use of closed network connection
	E1007 10:49:32.297595       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59380: use of closed network connection
	E1007 10:49:32.498258       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59404: use of closed network connection
	E1007 10:49:32.676693       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59420: use of closed network connection
	E1007 10:49:32.859242       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59440: use of closed network connection
	E1007 10:49:33.057965       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59468: use of closed network connection
	E1007 10:49:33.240103       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59478: use of closed network connection
	E1007 10:49:33.559788       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59494: use of closed network connection
	E1007 10:49:33.755853       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59504: use of closed network connection
	E1007 10:49:33.944169       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59516: use of closed network connection
	E1007 10:49:34.136074       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59544: use of closed network connection
	E1007 10:49:34.332211       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59568: use of closed network connection
	E1007 10:49:34.527795       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59588: use of closed network connection
	W1007 10:51:01.196929       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.250]
	
	
	==> kube-controller-manager [eb0b61d1fd92070463676e33d6b2f91e643d202d53b9af7712c8459581d39750] <==
	I1007 10:50:05.605601       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-406505-m04\" does not exist"
	I1007 10:50:05.651707       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-406505-m04" podCIDRs=["10.244.3.0/24"]
	I1007 10:50:05.651878       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:05.652095       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:05.866588       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:06.004135       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:06.156174       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:06.156822       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-406505-m04"
	I1007 10:50:06.254557       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:06.312035       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:06.987679       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:07.073914       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:15.971952       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:26.980381       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-406505-m04"
	I1007 10:50:26.982232       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:27.002591       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:27.205853       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:50:36.177995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m04"
	I1007 10:51:25.956486       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m02"
	I1007 10:51:25.956910       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-406505-m04"
	I1007 10:51:25.977091       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m02"
	I1007 10:51:26.074899       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.887988ms"
	I1007 10:51:26.075025       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.368µs"
	I1007 10:51:26.200250       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m02"
	I1007 10:51:31.167674       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-406505-m02"
	
	
	==> kube-proxy [99b7425285dcb9164d5c7fff76a667317585d30360dbfa7acfcbbb4564f111ff] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 10:46:47.887571       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 10:46:47.911134       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.250"]
	E1007 10:46:47.911278       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 10:46:47.980015       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 10:46:47.980045       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 10:46:47.980074       1 server_linux.go:169] "Using iptables Proxier"
	I1007 10:46:47.983497       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 10:46:47.984580       1 server.go:483] "Version info" version="v1.31.1"
	I1007 10:46:47.984594       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 10:46:47.987677       1 config.go:199] "Starting service config controller"
	I1007 10:46:47.988455       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 10:46:47.988871       1 config.go:105] "Starting endpoint slice config controller"
	I1007 10:46:47.988960       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 10:46:47.990124       1 config.go:328] "Starting node config controller"
	I1007 10:46:47.990263       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 10:46:48.088926       1 shared_informer.go:320] Caches are synced for service config
	I1007 10:46:48.090118       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 10:46:48.090928       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [fa4965d1b169f33dee456a383fa75c0f93cca6bf8017be5bf1f0787d83919887] <==
	W1007 10:46:40.575139       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 10:46:40.575275       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 10:46:40.704893       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 10:46:40.704946       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 10:46:40.706026       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 10:46:40.706071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 10:46:40.735457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 10:46:40.735594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 10:46:40.745564       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1007 10:46:40.745701       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1007 10:46:40.956352       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 10:46:40.956445       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1007 10:46:43.102324       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1007 10:50:05.717930       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-cqsll\": pod kindnet-cqsll is already assigned to node \"ha-406505-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-cqsll" node="ha-406505-m04"
	E1007 10:50:05.719300       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 62093c84-d91b-44ed-a605-198bd057ee89(kube-system/kindnet-cqsll) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-cqsll"
	E1007 10:50:05.719513       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-cqsll\": pod kindnet-cqsll is already assigned to node \"ha-406505-m04\"" pod="kube-system/kindnet-cqsll"
	I1007 10:50:05.719601       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-cqsll" node="ha-406505-m04"
	E1007 10:50:05.720316       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-8n5g6\": pod kube-proxy-8n5g6 is already assigned to node \"ha-406505-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-8n5g6" node="ha-406505-m04"
	E1007 10:50:05.724984       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod df46b5c0-261e-4455-bda8-d73ef0b24faa(kube-system/kube-proxy-8n5g6) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-8n5g6"
	E1007 10:50:05.725159       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8n5g6\": pod kube-proxy-8n5g6 is already assigned to node \"ha-406505-m04\"" pod="kube-system/kube-proxy-8n5g6"
	I1007 10:50:05.725258       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-8n5g6" node="ha-406505-m04"
	E1007 10:50:05.734867       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-957n4\": pod kindnet-957n4 is already assigned to node \"ha-406505-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-957n4" node="ha-406505-m04"
	E1007 10:50:05.736396       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 9b6e172b-6f7a-48e1-8a89-60f70e5b77f6(kube-system/kindnet-957n4) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-957n4"
	E1007 10:50:05.736761       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-957n4\": pod kindnet-957n4 is already assigned to node \"ha-406505-m04\"" pod="kube-system/kindnet-957n4"
	I1007 10:50:05.736855       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-957n4" node="ha-406505-m04"
	
	
	==> kubelet <==
	Oct 07 10:51:52 ha-406505 kubelet[1306]: E1007 10:51:52.612666    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298312612090878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:51:52 ha-406505 kubelet[1306]: E1007 10:51:52.612749    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298312612090878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:02 ha-406505 kubelet[1306]: E1007 10:52:02.614917    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298322614471502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:02 ha-406505 kubelet[1306]: E1007 10:52:02.615287    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298322614471502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:12 ha-406505 kubelet[1306]: E1007 10:52:12.617387    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298332617012708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:12 ha-406505 kubelet[1306]: E1007 10:52:12.617780    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298332617012708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:22 ha-406505 kubelet[1306]: E1007 10:52:22.620172    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298342619770777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:22 ha-406505 kubelet[1306]: E1007 10:52:22.620593    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298342619770777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:32 ha-406505 kubelet[1306]: E1007 10:52:32.622744    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298352622225858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:32 ha-406505 kubelet[1306]: E1007 10:52:32.622792    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298352622225858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:42 ha-406505 kubelet[1306]: E1007 10:52:42.472254    1306 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 10:52:42 ha-406505 kubelet[1306]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 10:52:42 ha-406505 kubelet[1306]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 10:52:42 ha-406505 kubelet[1306]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 10:52:42 ha-406505 kubelet[1306]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 10:52:42 ha-406505 kubelet[1306]: E1007 10:52:42.624989    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298362624467928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:42 ha-406505 kubelet[1306]: E1007 10:52:42.625274    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298362624467928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:52 ha-406505 kubelet[1306]: E1007 10:52:52.627616    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298372626959180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:52 ha-406505 kubelet[1306]: E1007 10:52:52.627689    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298372626959180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:53:02 ha-406505 kubelet[1306]: E1007 10:53:02.630238    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298382629746151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:53:02 ha-406505 kubelet[1306]: E1007 10:53:02.630676    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298382629746151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:53:12 ha-406505 kubelet[1306]: E1007 10:53:12.633509    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298392632773901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:53:12 ha-406505 kubelet[1306]: E1007 10:53:12.633800    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298392632773901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:53:22 ha-406505 kubelet[1306]: E1007 10:53:22.637621    1306 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298402636872924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:53:22 ha-406505 kubelet[1306]: E1007 10:53:22.637649    1306 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298402636872924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-406505 -n ha-406505
helpers_test.go:261: (dbg) Run:  kubectl --context ha-406505 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.34s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (783.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-406505 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-406505 -v=7 --alsologtostderr
E1007 10:54:36.381427   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:55:08.250650   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-406505 -v=7 --alsologtostderr: exit status 82 (2m1.901066536s)

                                                
                                                
-- stdout --
	* Stopping node "ha-406505-m04"  ...
	* Stopping node "ha-406505-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 10:53:28.889823   28927 out.go:345] Setting OutFile to fd 1 ...
	I1007 10:53:28.890392   28927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:53:28.890408   28927 out.go:358] Setting ErrFile to fd 2...
	I1007 10:53:28.890415   28927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:53:28.890936   28927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 10:53:28.891399   28927 out.go:352] Setting JSON to false
	I1007 10:53:28.891498   28927 mustload.go:65] Loading cluster: ha-406505
	I1007 10:53:28.891905   28927 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:53:28.892015   28927 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:53:28.892195   28927 mustload.go:65] Loading cluster: ha-406505
	I1007 10:53:28.892330   28927 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:53:28.892360   28927 stop.go:39] StopHost: ha-406505-m04
	I1007 10:53:28.892733   28927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:53:28.892771   28927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:53:28.908147   28927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45971
	I1007 10:53:28.908584   28927 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:53:28.909089   28927 main.go:141] libmachine: Using API Version  1
	I1007 10:53:28.909111   28927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:53:28.909454   28927 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:53:28.911947   28927 out.go:177] * Stopping node "ha-406505-m04"  ...
	I1007 10:53:28.913628   28927 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1007 10:53:28.913667   28927 main.go:141] libmachine: (ha-406505-m04) Calling .DriverName
	I1007 10:53:28.913916   28927 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1007 10:53:28.913942   28927 main.go:141] libmachine: (ha-406505-m04) Calling .GetSSHHostname
	I1007 10:53:28.916591   28927 main.go:141] libmachine: (ha-406505-m04) DBG | domain ha-406505-m04 has defined MAC address 52:54:00:cf:03:46 in network mk-ha-406505
	I1007 10:53:28.917041   28927 main.go:141] libmachine: (ha-406505-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:03:46", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:49:50 +0000 UTC Type:0 Mac:52:54:00:cf:03:46 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-406505-m04 Clientid:01:52:54:00:cf:03:46}
	I1007 10:53:28.917070   28927 main.go:141] libmachine: (ha-406505-m04) DBG | domain ha-406505-m04 has defined IP address 192.168.39.2 and MAC address 52:54:00:cf:03:46 in network mk-ha-406505
	I1007 10:53:28.917244   28927 main.go:141] libmachine: (ha-406505-m04) Calling .GetSSHPort
	I1007 10:53:28.917418   28927 main.go:141] libmachine: (ha-406505-m04) Calling .GetSSHKeyPath
	I1007 10:53:28.917576   28927 main.go:141] libmachine: (ha-406505-m04) Calling .GetSSHUsername
	I1007 10:53:28.917701   28927 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m04/id_rsa Username:docker}
	I1007 10:53:29.013750   28927 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1007 10:53:29.069737   28927 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1007 10:53:29.124773   28927 main.go:141] libmachine: Stopping "ha-406505-m04"...
	I1007 10:53:29.124799   28927 main.go:141] libmachine: (ha-406505-m04) Calling .GetState
	I1007 10:53:29.126304   28927 main.go:141] libmachine: (ha-406505-m04) Calling .Stop
	I1007 10:53:29.129968   28927 main.go:141] libmachine: (ha-406505-m04) Waiting for machine to stop 0/120
	I1007 10:53:30.320010   28927 main.go:141] libmachine: (ha-406505-m04) Calling .GetState
	I1007 10:53:30.321289   28927 main.go:141] libmachine: Machine "ha-406505-m04" was stopped.
	I1007 10:53:30.321307   28927 stop.go:75] duration metric: took 1.407681357s to stop
	I1007 10:53:30.321347   28927 stop.go:39] StopHost: ha-406505-m03
	I1007 10:53:30.321658   28927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:53:30.321708   28927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:53:30.338769   28927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38625
	I1007 10:53:30.339273   28927 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:53:30.339800   28927 main.go:141] libmachine: Using API Version  1
	I1007 10:53:30.339820   28927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:53:30.340180   28927 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:53:30.342430   28927 out.go:177] * Stopping node "ha-406505-m03"  ...
	I1007 10:53:30.343602   28927 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1007 10:53:30.343624   28927 main.go:141] libmachine: (ha-406505-m03) Calling .DriverName
	I1007 10:53:30.343834   28927 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1007 10:53:30.343853   28927 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHHostname
	I1007 10:53:30.346982   28927 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:53:30.347527   28927 main.go:141] libmachine: (ha-406505-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e4:e0", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:48:20 +0000 UTC Type:0 Mac:52:54:00:7e:e4:e0 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-406505-m03 Clientid:01:52:54:00:7e:e4:e0}
	I1007 10:53:30.347574   28927 main.go:141] libmachine: (ha-406505-m03) DBG | domain ha-406505-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:7e:e4:e0 in network mk-ha-406505
	I1007 10:53:30.347718   28927 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHPort
	I1007 10:53:30.347892   28927 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHKeyPath
	I1007 10:53:30.348057   28927 main.go:141] libmachine: (ha-406505-m03) Calling .GetSSHUsername
	I1007 10:53:30.348193   28927 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m03/id_rsa Username:docker}
	I1007 10:53:30.437686   28927 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1007 10:53:30.491959   28927 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1007 10:53:30.546081   28927 main.go:141] libmachine: Stopping "ha-406505-m03"...
	I1007 10:53:30.546108   28927 main.go:141] libmachine: (ha-406505-m03) Calling .GetState
	I1007 10:53:30.547839   28927 main.go:141] libmachine: (ha-406505-m03) Calling .Stop
	I1007 10:53:30.551610   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 0/120
	I1007 10:53:31.553099   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 1/120
	I1007 10:53:32.554270   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 2/120
	I1007 10:53:33.555781   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 3/120
	I1007 10:53:34.557145   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 4/120
	I1007 10:53:35.559222   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 5/120
	I1007 10:53:36.560707   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 6/120
	I1007 10:53:37.562230   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 7/120
	I1007 10:53:38.563699   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 8/120
	I1007 10:53:39.565347   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 9/120
	I1007 10:53:40.567382   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 10/120
	I1007 10:53:41.569120   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 11/120
	I1007 10:53:42.570746   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 12/120
	I1007 10:53:43.572292   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 13/120
	I1007 10:53:44.573634   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 14/120
	I1007 10:53:45.575795   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 15/120
	I1007 10:53:46.577176   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 16/120
	I1007 10:53:47.579334   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 17/120
	I1007 10:53:48.580546   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 18/120
	I1007 10:53:49.582613   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 19/120
	I1007 10:53:50.585061   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 20/120
	I1007 10:53:51.586902   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 21/120
	I1007 10:53:52.588755   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 22/120
	I1007 10:53:53.590407   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 23/120
	I1007 10:53:54.591905   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 24/120
	I1007 10:53:55.593807   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 25/120
	I1007 10:53:56.595296   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 26/120
	I1007 10:53:57.596815   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 27/120
	I1007 10:53:58.598423   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 28/120
	I1007 10:53:59.599646   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 29/120
	I1007 10:54:00.601080   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 30/120
	I1007 10:54:01.603256   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 31/120
	I1007 10:54:02.604656   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 32/120
	I1007 10:54:03.606177   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 33/120
	I1007 10:54:04.607507   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 34/120
	I1007 10:54:05.609306   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 35/120
	I1007 10:54:06.610792   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 36/120
	I1007 10:54:07.612098   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 37/120
	I1007 10:54:08.613584   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 38/120
	I1007 10:54:09.614974   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 39/120
	I1007 10:54:10.616687   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 40/120
	I1007 10:54:11.617903   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 41/120
	I1007 10:54:12.619183   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 42/120
	I1007 10:54:13.620603   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 43/120
	I1007 10:54:14.622067   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 44/120
	I1007 10:54:15.624013   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 45/120
	I1007 10:54:16.625301   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 46/120
	I1007 10:54:17.626827   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 47/120
	I1007 10:54:18.628370   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 48/120
	I1007 10:54:19.630377   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 49/120
	I1007 10:54:20.632158   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 50/120
	I1007 10:54:21.634369   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 51/120
	I1007 10:54:22.635698   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 52/120
	I1007 10:54:23.637140   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 53/120
	I1007 10:54:24.638542   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 54/120
	I1007 10:54:25.640630   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 55/120
	I1007 10:54:26.642305   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 56/120
	I1007 10:54:27.643671   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 57/120
	I1007 10:54:28.645046   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 58/120
	I1007 10:54:29.646666   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 59/120
	I1007 10:54:30.648015   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 60/120
	I1007 10:54:31.649253   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 61/120
	I1007 10:54:32.650983   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 62/120
	I1007 10:54:33.652757   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 63/120
	I1007 10:54:34.654046   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 64/120
	I1007 10:54:35.656236   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 65/120
	I1007 10:54:36.657903   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 66/120
	I1007 10:54:37.659071   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 67/120
	I1007 10:54:38.660584   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 68/120
	I1007 10:54:39.662227   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 69/120
	I1007 10:54:40.663941   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 70/120
	I1007 10:54:41.665177   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 71/120
	I1007 10:54:42.666495   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 72/120
	I1007 10:54:43.667855   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 73/120
	I1007 10:54:44.669413   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 74/120
	I1007 10:54:45.670748   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 75/120
	I1007 10:54:46.671843   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 76/120
	I1007 10:54:47.673025   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 77/120
	I1007 10:54:48.674354   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 78/120
	I1007 10:54:49.676259   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 79/120
	I1007 10:54:50.678083   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 80/120
	I1007 10:54:51.679195   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 81/120
	I1007 10:54:52.680740   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 82/120
	I1007 10:54:53.682064   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 83/120
	I1007 10:54:54.683295   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 84/120
	I1007 10:54:55.684983   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 85/120
	I1007 10:54:56.686162   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 86/120
	I1007 10:54:57.687623   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 87/120
	I1007 10:54:58.689065   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 88/120
	I1007 10:54:59.690442   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 89/120
	I1007 10:55:00.692132   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 90/120
	I1007 10:55:01.693417   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 91/120
	I1007 10:55:02.694809   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 92/120
	I1007 10:55:03.696272   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 93/120
	I1007 10:55:04.697593   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 94/120
	I1007 10:55:05.699304   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 95/120
	I1007 10:55:06.701046   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 96/120
	I1007 10:55:07.702576   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 97/120
	I1007 10:55:08.703903   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 98/120
	I1007 10:55:09.705277   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 99/120
	I1007 10:55:10.707190   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 100/120
	I1007 10:55:11.708605   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 101/120
	I1007 10:55:12.710111   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 102/120
	I1007 10:55:13.711442   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 103/120
	I1007 10:55:14.712792   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 104/120
	I1007 10:55:15.714411   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 105/120
	I1007 10:55:16.715787   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 106/120
	I1007 10:55:17.717344   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 107/120
	I1007 10:55:18.718624   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 108/120
	I1007 10:55:19.720707   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 109/120
	I1007 10:55:20.722196   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 110/120
	I1007 10:55:21.723600   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 111/120
	I1007 10:55:22.725450   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 112/120
	I1007 10:55:23.726914   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 113/120
	I1007 10:55:24.728568   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 114/120
	I1007 10:55:25.730316   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 115/120
	I1007 10:55:26.731854   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 116/120
	I1007 10:55:27.733252   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 117/120
	I1007 10:55:28.734851   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 118/120
	I1007 10:55:29.736427   28927 main.go:141] libmachine: (ha-406505-m03) Waiting for machine to stop 119/120
	I1007 10:55:30.737047   28927 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1007 10:55:30.737090   28927 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1007 10:55:30.738863   28927 out.go:201] 
	W1007 10:55:30.740312   28927 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1007 10:55:30.740327   28927 out.go:270] * 
	* 
	W1007 10:55:30.742619   28927 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 10:55:30.744365   28927 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-406505 -v=7 --alsologtostderr" : exit status 82
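Note on the failure above: the stop phase polls the libvirt guest once per second for 120 attempts (roughly two minutes) and, because ha-406505-m03 never left the "Running" state, minikube gives up with GUEST_STOP_TIMEOUT (exit status 82). A rough way to inspect or force the same guest outside the test harness would be the following sketch; it was not run as part of this test, and it only assumes the libvirt domain name and qemu:///system URI that appear in the log above:
	virsh -c qemu:///system domstate ha-406505-m03    # hypothetical manual check; expected to still report "running" when this timeout fires
	virsh -c qemu:///system destroy ha-406505-m03     # hypothetical hard power-off of the guest, bypassing the graceful ACPI stop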
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-406505 --wait=true -v=7 --alsologtostderr
E1007 10:55:35.954894   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:59:36.380839   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:00:08.249934   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:00:59.446849   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:04:36.381165   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:05:08.250777   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-406505 --wait=true -v=7 --alsologtostderr: exit status 80 (10m55.837246848s)

                                                
                                                
-- stdout --
	* [ha-406505] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-406505" primary control-plane node in "ha-406505" cluster
	* Updating the running kvm2 "ha-406505" VM ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	* Enabled addons: 
	
	* Starting "ha-406505-m02" control-plane node in "ha-406505" cluster
	* Restarting existing kvm2 VM for "ha-406505-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.250
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.250
	* Verifying Kubernetes components...
	
	* Starting "ha-406505-m03" control-plane node in "ha-406505" cluster
	* Restarting existing kvm2 VM for "ha-406505-m03" ...
	* Found network options:
	  - NO_PROXY=192.168.39.250,192.168.39.37
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.250
	  - env NO_PROXY=192.168.39.250,192.168.39.37
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 10:55:30.794033   29447 out.go:345] Setting OutFile to fd 1 ...
	I1007 10:55:30.794343   29447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:55:30.794353   29447 out.go:358] Setting ErrFile to fd 2...
	I1007 10:55:30.794358   29447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:55:30.794636   29447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 10:55:30.795389   29447 out.go:352] Setting JSON to false
	I1007 10:55:30.796394   29447 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2225,"bootTime":1728296306,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 10:55:30.796491   29447 start.go:139] virtualization: kvm guest
	I1007 10:55:30.799104   29447 out.go:177] * [ha-406505] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 10:55:30.800626   29447 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 10:55:30.800640   29447 notify.go:220] Checking for updates...
	I1007 10:55:30.803410   29447 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 10:55:30.804914   29447 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:55:30.806210   29447 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:55:30.807469   29447 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 10:55:30.808873   29447 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 10:55:30.810633   29447 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:55:30.810741   29447 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 10:55:30.811301   29447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:55:30.811382   29447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:55:30.827997   29447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41053
	I1007 10:55:30.828419   29447 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:55:30.828927   29447 main.go:141] libmachine: Using API Version  1
	I1007 10:55:30.828950   29447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:55:30.829275   29447 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:55:30.829462   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:55:30.866638   29447 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 10:55:30.867832   29447 start.go:297] selected driver: kvm2
	I1007 10:55:30.867847   29447 start.go:901] validating driver "kvm2" against &{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.2 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:55:30.867993   29447 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 10:55:30.868324   29447 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:55:30.868393   29447 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19761-3912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 10:55:30.883607   29447 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 10:55:30.884398   29447 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 10:55:30.884430   29447 cni.go:84] Creating CNI manager for ""
	I1007 10:55:30.884477   29447 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1007 10:55:30.884532   29447 start.go:340] cluster config:
	{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.2 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:55:30.884667   29447 iso.go:125] acquiring lock: {Name:mk894f62b1e70fe8556c552f818beb1e4be6fad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:55:30.887625   29447 out.go:177] * Starting "ha-406505" primary control-plane node in "ha-406505" cluster
	I1007 10:55:30.889130   29447 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:55:30.889173   29447 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 10:55:30.889180   29447 cache.go:56] Caching tarball of preloaded images
	I1007 10:55:30.889265   29447 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 10:55:30.889276   29447 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 10:55:30.889406   29447 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:55:30.889609   29447 start.go:360] acquireMachinesLock for ha-406505: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 10:55:30.889652   29447 start.go:364] duration metric: took 24.494µs to acquireMachinesLock for "ha-406505"
	I1007 10:55:30.889665   29447 start.go:96] Skipping create...Using existing machine configuration
	I1007 10:55:30.889672   29447 fix.go:54] fixHost starting: 
	I1007 10:55:30.889919   29447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:55:30.889956   29447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:55:30.905409   29447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44753
	I1007 10:55:30.905796   29447 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:55:30.906241   29447 main.go:141] libmachine: Using API Version  1
	I1007 10:55:30.906267   29447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:55:30.906599   29447 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:55:30.906789   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:55:30.906907   29447 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:55:30.908591   29447 fix.go:112] recreateIfNeeded on ha-406505: state=Running err=<nil>
	W1007 10:55:30.908611   29447 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 10:55:30.911510   29447 out.go:177] * Updating the running kvm2 "ha-406505" VM ...
	I1007 10:55:30.912725   29447 machine.go:93] provisionDockerMachine start ...
	I1007 10:55:30.912748   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:55:30.913010   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:30.915628   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:30.916120   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:30.916146   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:30.916330   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:55:30.916511   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:30.916680   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:30.916822   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:55:30.916955   29447 main.go:141] libmachine: Using SSH client type: native
	I1007 10:55:30.917153   29447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:55:30.917166   29447 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 10:55:31.033780   29447 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406505
	
	I1007 10:55:31.033807   29447 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:55:31.034055   29447 buildroot.go:166] provisioning hostname "ha-406505"
	I1007 10:55:31.034084   29447 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:55:31.034284   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:31.036957   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.037413   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.037434   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.037635   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:55:31.037817   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.037986   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.038124   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:55:31.038289   29447 main.go:141] libmachine: Using SSH client type: native
	I1007 10:55:31.038459   29447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:55:31.038471   29447 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406505 && echo "ha-406505" | sudo tee /etc/hostname
	I1007 10:55:31.163165   29447 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406505
	
	I1007 10:55:31.163191   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:31.165768   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.166076   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.166103   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.166240   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:55:31.166482   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.166659   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.166867   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:55:31.167037   29447 main.go:141] libmachine: Using SSH client type: native
	I1007 10:55:31.167200   29447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:55:31.167215   29447 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406505' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406505/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406505' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 10:55:31.281078   29447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:55:31.281115   29447 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 10:55:31.281162   29447 buildroot.go:174] setting up certificates
	I1007 10:55:31.281174   29447 provision.go:84] configureAuth start
	I1007 10:55:31.281188   29447 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:55:31.281444   29447 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:55:31.283970   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.284388   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.284407   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.284595   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:31.287215   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.287589   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.287607   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.287775   29447 provision.go:143] copyHostCerts
	I1007 10:55:31.287819   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:55:31.287852   29447 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 10:55:31.287869   29447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:55:31.287940   29447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 10:55:31.288067   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:55:31.288094   29447 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 10:55:31.288104   29447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:55:31.288150   29447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 10:55:31.288213   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:55:31.288231   29447 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 10:55:31.288238   29447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:55:31.288273   29447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 10:55:31.288330   29447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.ha-406505 san=[127.0.0.1 192.168.39.250 ha-406505 localhost minikube]
	I1007 10:55:31.355824   29447 provision.go:177] copyRemoteCerts
	I1007 10:55:31.355877   29447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 10:55:31.355903   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:31.358704   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.359013   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.359045   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.359197   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:55:31.359373   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.359532   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:55:31.359697   29447 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:55:31.447226   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 10:55:31.447288   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 10:55:31.474841   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 10:55:31.474941   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1007 10:55:31.503482   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 10:55:31.503562   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 10:55:31.530808   29447 provision.go:87] duration metric: took 249.62125ms to configureAuth
	I1007 10:55:31.530835   29447 buildroot.go:189] setting minikube options for container-runtime
	I1007 10:55:31.531044   29447 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:55:31.531130   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:31.534412   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.534867   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.534899   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.535087   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:55:31.535266   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.535472   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.535637   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:55:31.535791   29447 main.go:141] libmachine: Using SSH client type: native
	I1007 10:55:31.535959   29447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:55:31.536003   29447 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 10:57:02.380736   29447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 10:57:02.380814   29447 machine.go:96] duration metric: took 1m31.468035985s to provisionDockerMachine
	I1007 10:57:02.380830   29447 start.go:293] postStartSetup for "ha-406505" (driver="kvm2")
	I1007 10:57:02.380850   29447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 10:57:02.380876   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.381188   29447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 10:57:02.381220   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:57:02.384384   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.384896   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.384926   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.385018   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:57:02.385183   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.385347   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:57:02.385473   29447 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:57:02.471888   29447 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 10:57:02.476934   29447 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 10:57:02.476965   29447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 10:57:02.477032   29447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 10:57:02.477129   29447 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 10:57:02.477144   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /etc/ssl/certs/110962.pem
	I1007 10:57:02.477256   29447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 10:57:02.487344   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:57:02.513177   29447 start.go:296] duration metric: took 132.325528ms for postStartSetup
	I1007 10:57:02.513227   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.513496   29447 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1007 10:57:02.513521   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:57:02.516263   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.516783   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.516813   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.516980   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:57:02.517176   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.517396   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:57:02.517564   29447 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	W1007 10:57:02.602805   29447 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1007 10:57:02.602831   29447 fix.go:56] duration metric: took 1m31.713158307s for fixHost
	I1007 10:57:02.602856   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:57:02.605787   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.606125   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.606153   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.606373   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:57:02.606599   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.606770   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.606900   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:57:02.607063   29447 main.go:141] libmachine: Using SSH client type: native
	I1007 10:57:02.607214   29447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:57:02.607225   29447 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 10:57:02.716959   29447 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728298622.682212120
	
	I1007 10:57:02.716987   29447 fix.go:216] guest clock: 1728298622.682212120
	I1007 10:57:02.716996   29447 fix.go:229] Guest: 2024-10-07 10:57:02.68221212 +0000 UTC Remote: 2024-10-07 10:57:02.602839413 +0000 UTC m=+91.848037136 (delta=79.372707ms)
	I1007 10:57:02.717030   29447 fix.go:200] guest clock delta is within tolerance: 79.372707ms
	I1007 10:57:02.717039   29447 start.go:83] releasing machines lock for "ha-406505", held for 1m31.827376309s
	I1007 10:57:02.717068   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.717326   29447 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:57:02.719717   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.720045   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.720070   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.720179   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.720690   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.720867   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.720951   29447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 10:57:02.721002   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:57:02.721045   29447 ssh_runner.go:195] Run: cat /version.json
	I1007 10:57:02.721066   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:57:02.723380   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.723574   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.723766   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.723798   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.723929   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:57:02.724086   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.724104   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.724106   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.724245   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:57:02.724286   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:57:02.724375   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.724386   29447 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:57:02.724493   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:57:02.724605   29447 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:57:02.828482   29447 ssh_runner.go:195] Run: systemctl --version
	I1007 10:57:02.834933   29447 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 10:57:02.995415   29447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 10:57:03.004313   29447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 10:57:03.004375   29447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 10:57:03.014071   29447 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
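Note: the find ... -exec mv {} {}.mk_disabled step above renames any stock bridge/podman CNI configs out of the way so that only the CNI minikube manages stays active; here nothing matched. A minimal way to inspect (or undo) this on the node, assuming shell access via minikube ssh (the podman file name below is only an example):
	ls /etc/cni/net.d/                                   # disabled configs end in .mk_disabled
	sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled /etc/cni/net.d/87-podman-bridge.conflist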
	I1007 10:57:03.014098   29447 start.go:495] detecting cgroup driver to use...
	I1007 10:57:03.014160   29447 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 10:57:03.031548   29447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 10:57:03.045665   29447 docker.go:217] disabling cri-docker service (if available) ...
	I1007 10:57:03.045720   29447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 10:57:03.060885   29447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 10:57:03.075305   29447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 10:57:03.229941   29447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 10:57:03.380003   29447 docker.go:233] disabling docker service ...
	I1007 10:57:03.380072   29447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 10:57:03.397931   29447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 10:57:03.412383   29447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 10:57:03.567900   29447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 10:57:03.721366   29447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 10:57:03.737163   29447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 10:57:03.756494   29447 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 10:57:03.756570   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.767799   29447 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 10:57:03.767866   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.778739   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.789495   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.800585   29447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 10:57:03.813221   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.824053   29447 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.835220   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.845426   29447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 10:57:03.854894   29447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 10:57:03.864074   29447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:57:04.020012   29447 ssh_runner.go:195] Run: sudo systemctl restart crio
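Note: the sed edits above only touch /etc/crio/crio.conf.d/02-crio.conf; the daemon-reload plus restart is what makes CRI-O pick them up. A hedged verification sketch for the node:
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	systemctl is-active crio      # "active" once the restart has completed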
	I1007 10:57:04.256195   29447 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 10:57:04.256262   29447 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 10:57:04.261541   29447 start.go:563] Will wait 60s for crictl version
	I1007 10:57:04.261605   29447 ssh_runner.go:195] Run: which crictl
	I1007 10:57:04.266424   29447 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 10:57:04.306687   29447 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
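Note: the /etc/crictl.yaml written a few lines up (runtime-endpoint: unix:///var/run/crio/crio.sock) is what lets the bare "crictl version" call above reach CRI-O and report RuntimeName/RuntimeVersion. A quick sanity check on the node, assuming crictl is present in the guest image as it is here:
	cat /etc/crictl.yaml          # runtime-endpoint: unix:///var/run/crio/crio.sock
	sudo crictl info              # reaches CRI-O without an explicit --runtime-endpoint flag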
	I1007 10:57:04.306770   29447 ssh_runner.go:195] Run: crio --version
	I1007 10:57:04.342644   29447 ssh_runner.go:195] Run: crio --version
	I1007 10:57:04.376624   29447 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 10:57:04.378190   29447 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:57:04.381211   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:04.381557   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:04.381578   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:04.381799   29447 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 10:57:04.386556   29447 kubeadm.go:883] updating cluster {Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.2 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 10:57:04.386679   29447 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:57:04.386728   29447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:57:04.431534   29447 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 10:57:04.431564   29447 crio.go:433] Images already preloaded, skipping extraction
	I1007 10:57:04.431618   29447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:57:04.471722   29447 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 10:57:04.471751   29447 cache_images.go:84] Images are preloaded, skipping loading
	I1007 10:57:04.471764   29447 kubeadm.go:934] updating node { 192.168.39.250 8443 v1.31.1 crio true true} ...
	I1007 10:57:04.471889   29447 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406505 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
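Note: the [Service] drop-in rendered here is what lands on the node as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 309-byte scp further down). To see how systemd merges it with the base unit, a sketch:
	systemctl cat kubelet                    # base kubelet.service plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart      # shows the overridden ExecStart with --node-ip=192.168.39.250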
	I1007 10:57:04.471959   29447 ssh_runner.go:195] Run: crio config
	I1007 10:57:04.525534   29447 cni.go:84] Creating CNI manager for ""
	I1007 10:57:04.525555   29447 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1007 10:57:04.525564   29447 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 10:57:04.525581   29447 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.250 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-406505 NodeName:ha-406505 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 10:57:04.525698   29447 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-406505"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
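Note: the kubeadm config rendered above is copied to the node as /var/tmp/minikube/kubeadm.yaml.new (the 2153-byte scp further down) before kubeadm consumes it. If a config like this ever needs checking by hand, recent kubeadm releases can validate it offline; a sketch outside the minikube flow:
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new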
	
	I1007 10:57:04.525716   29447 kube-vip.go:115] generating kube-vip config ...
	I1007 10:57:04.525751   29447 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 10:57:04.537676   29447 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 10:57:04.537777   29447 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
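Note: this static pod puts the APIServerHAVIP (192.168.39.254, the "address" env var) on eth0 of whichever control-plane node currently holds the plndr-cp-lock lease, and with lb_enable/lb_port it also balances port 8443 across the control planes. A hedged check once a control plane is up:
	ip -4 addr show dev eth0 | grep 192.168.39.254     # present only on the current kube-vip leader
	curl -k https://192.168.39.254:8443/version        # the VIP should answer with the apiserver build info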
	I1007 10:57:04.537841   29447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 10:57:04.547556   29447 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 10:57:04.547619   29447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1007 10:57:04.557240   29447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1007 10:57:04.575646   29447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 10:57:04.593225   29447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1007 10:57:04.611864   29447 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 10:57:04.630249   29447 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 10:57:04.634460   29447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:57:04.779449   29447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:57:04.794566   29447 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505 for IP: 192.168.39.250
	I1007 10:57:04.794589   29447 certs.go:194] generating shared ca certs ...
	I1007 10:57:04.794603   29447 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:57:04.794760   29447 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 10:57:04.794902   29447 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 10:57:04.794927   29447 certs.go:256] generating profile certs ...
	I1007 10:57:04.795030   29447 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key
	I1007 10:57:04.795066   29447 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.0791b376
	I1007 10:57:04.795083   29447 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.0791b376 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.250 192.168.39.37 192.168.39.102 192.168.39.254]
	I1007 10:57:05.108330   29447 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.0791b376 ...
	I1007 10:57:05.108361   29447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.0791b376: {Name:mk04adcfb95e9408df73c49cc28f69521efd4eb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:57:05.108524   29447 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.0791b376 ...
	I1007 10:57:05.108541   29447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.0791b376: {Name:mk08d01b1655950dbc2445f79f2d8bdc29563add Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:57:05.108614   29447 certs.go:381] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.0791b376 -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt
	I1007 10:57:05.108753   29447 certs.go:385] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.0791b376 -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key
	I1007 10:57:05.108875   29447 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key
	I1007 10:57:05.108890   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 10:57:05.108904   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 10:57:05.108914   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 10:57:05.108926   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 10:57:05.108938   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 10:57:05.108949   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 10:57:05.108961   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 10:57:05.108973   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 10:57:05.109020   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 10:57:05.109055   29447 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 10:57:05.109066   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 10:57:05.109091   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 10:57:05.109135   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 10:57:05.109164   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 10:57:05.109202   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:57:05.109238   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem -> /usr/share/ca-certificates/11096.pem
	I1007 10:57:05.109251   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /usr/share/ca-certificates/110962.pem
	I1007 10:57:05.109262   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:57:05.109871   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 10:57:05.360442   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 10:57:05.605815   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 10:57:05.850088   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 10:57:06.219588   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1007 10:57:06.276707   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 10:57:06.318692   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 10:57:06.348933   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 10:57:06.385454   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 10:57:06.415472   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 10:57:06.447267   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 10:57:06.504935   29447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 10:57:06.532354   29447 ssh_runner.go:195] Run: openssl version
	I1007 10:57:06.539545   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 10:57:06.554465   29447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 10:57:06.560708   29447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 10:57:06.560773   29447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 10:57:06.569485   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
	I1007 10:57:06.586629   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 10:57:06.600762   29447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 10:57:06.608271   29447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 10:57:06.608356   29447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 10:57:06.616754   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 10:57:06.632118   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 10:57:06.646429   29447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:57:06.655247   29447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:57:06.655315   29447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:57:06.661893   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
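Note: each ln -fs above names the link after the certificate's OpenSSL subject hash (51391683, 3ec20f2e, b5213941), which is how OpenSSL-based clients look CAs up under /etc/ssl/certs; the hash comes from the openssl x509 -hash call that precedes each link. For the minikube CA, for example:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0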
	I1007 10:57:06.674956   29447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 10:57:06.682421   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 10:57:06.688720   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 10:57:06.695527   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 10:57:06.702575   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 10:57:06.709386   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 10:57:06.715690   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
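Note: -checkend 86400 makes openssl exit non-zero if the certificate would expire within the next 86400 seconds (24 h); here it is apparently used to confirm the existing control-plane certs are still good for at least a day before they are reused. Standalone, the same check reads:
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "valid for at least 24h" || echo "expires within 24h"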
	I1007 10:57:06.724003   29447 kubeadm.go:392] StartCluster: {Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clus
terName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.2 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:57:06.724168   29447 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 10:57:06.724228   29447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 10:57:06.792220   29447 cri.go:89] found id: "a64bf2e21e156733427ea0d3a45ec9f23d99632adb4fc9587bd263896cb45c81"
	I1007 10:57:06.792250   29447 cri.go:89] found id: "606ec7a724513e10c9da9a27b0b650b8c529f2df4f1079e79bcb30d4c7839fcf"
	I1007 10:57:06.792256   29447 cri.go:89] found id: "d6f4624f73f68c6b59c63a1c3a5b28b4d748f196ec2bac402e5462f97addeae5"
	I1007 10:57:06.792261   29447 cri.go:89] found id: "630e5de32b697cc2301625c159c7ec527a1d4c719a4018553d5edb345a23ca79"
	I1007 10:57:06.792265   29447 cri.go:89] found id: "54438f91675378609a3f994ca735839da4a4bdd24c088cd3a42b45cdf6008d74"
	I1007 10:57:06.792270   29447 cri.go:89] found id: "815c284d9f8c834cea5412ecc0f136a8219af90faff522693c81431cfcbb170e"
	I1007 10:57:06.792273   29447 cri.go:89] found id: "048e86e40dd08c62b9fed5f84a6d7c6ba376d8e40348f0a461ee4b5ed1eb0c1e"
	I1007 10:57:06.792284   29447 cri.go:89] found id: "55130afb3140b78545837a44e0d1200ed084970a981975f2439a746c1aee5ecd"
	I1007 10:57:06.792289   29447 cri.go:89] found id: "1799fca1e0776626eea0f6a1d7d4e5470021a7a26022e13fbb3dd3fd3a4dff19"
	I1007 10:57:06.792295   29447 cri.go:89] found id: "809bd2a742c43a680efa79ca906fec95b70290a0d3fe3628198ee66abc1da27b"
	I1007 10:57:06.792299   29447 cri.go:89] found id: "46ee0ba8c50585b784c79a0db0e2996a651504cb4a60879c5e7db44d64cd22c6"
	I1007 10:57:06.792303   29447 cri.go:89] found id: "b0cc4a36e486c6a488e846bfbf03e43b7da70bb2bf99a153487b87f34d565f12"
	I1007 10:57:06.792308   29447 cri.go:89] found id: "0ebc4ee6afc90a7e8d867a5ba3221808427590e259e04a5ba21cc1196931b136"
	I1007 10:57:06.792312   29447 cri.go:89] found id: "4abb8ea9312274b1d39534e4b29dcfa3f2435a34e478f7748745c76ad1380dec"
	I1007 10:57:06.792320   29447 cri.go:89] found id: "99b7425285dcb9164d5c7fff76a667317585d30360dbfa7acfcbbb4564f111ff"
	I1007 10:57:06.792324   29447 cri.go:89] found id: "fa4965d1b169f33dee456a383fa75c0f93cca6bf8017be5bf1f0787d83919887"
	I1007 10:57:06.792328   29447 cri.go:89] found id: "11a16a81bf6bf12ab8aa881ebbb9fc65f881114f3caf29ac204033459e11987b"
	I1007 10:57:06.792333   29447 cri.go:89] found id: "eb0b61d1fd92070463676e33d6b2f91e643d202d53b9af7712c8459581d39750"
	I1007 10:57:06.792337   29447 cri.go:89] found id: ""
	I1007 10:57:06.792388   29447 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-amd64 node list -p ha-406505 -v=7 --alsologtostderr" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-406505
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-406505 -n ha-406505
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 logs -n 25
E1007 11:06:31.316419   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-406505 logs -n 25: (4.822963907s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-406505 cp ha-406505-m03:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m02:/home/docker/cp-test_ha-406505-m03_ha-406505-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m02 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m03_ha-406505-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m03:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04:/home/docker/cp-test_ha-406505-m03_ha-406505-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m04 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m03_ha-406505-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-406505 cp testdata/cp-test.txt                                                | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2665267876/001/cp-test_ha-406505-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505:/home/docker/cp-test_ha-406505-m04_ha-406505.txt                       |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505 sudo cat                                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m04_ha-406505.txt                                 |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m02:/home/docker/cp-test_ha-406505-m04_ha-406505-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m02 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m04_ha-406505-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03:/home/docker/cp-test_ha-406505-m04_ha-406505-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m03 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m04_ha-406505-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-406505 node stop m02 -v=7                                                     | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-406505 node start m02 -v=7                                                    | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-406505 -v=7                                                           | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-406505 -v=7                                                                | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-406505 --wait=true -v=7                                                    | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-406505                                                                | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 11:06 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 10:55:30
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 10:55:30.794033   29447 out.go:345] Setting OutFile to fd 1 ...
	I1007 10:55:30.794343   29447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:55:30.794353   29447 out.go:358] Setting ErrFile to fd 2...
	I1007 10:55:30.794358   29447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:55:30.794636   29447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 10:55:30.795389   29447 out.go:352] Setting JSON to false
	I1007 10:55:30.796394   29447 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2225,"bootTime":1728296306,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 10:55:30.796491   29447 start.go:139] virtualization: kvm guest
	I1007 10:55:30.799104   29447 out.go:177] * [ha-406505] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 10:55:30.800626   29447 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 10:55:30.800640   29447 notify.go:220] Checking for updates...
	I1007 10:55:30.803410   29447 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 10:55:30.804914   29447 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:55:30.806210   29447 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:55:30.807469   29447 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 10:55:30.808873   29447 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 10:55:30.810633   29447 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:55:30.810741   29447 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 10:55:30.811301   29447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:55:30.811382   29447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:55:30.827997   29447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41053
	I1007 10:55:30.828419   29447 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:55:30.828927   29447 main.go:141] libmachine: Using API Version  1
	I1007 10:55:30.828950   29447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:55:30.829275   29447 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:55:30.829462   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:55:30.866638   29447 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 10:55:30.867832   29447 start.go:297] selected driver: kvm2
	I1007 10:55:30.867847   29447 start.go:901] validating driver "kvm2" against &{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.2 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:55:30.867993   29447 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 10:55:30.868324   29447 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:55:30.868393   29447 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19761-3912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 10:55:30.883607   29447 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 10:55:30.884398   29447 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 10:55:30.884430   29447 cni.go:84] Creating CNI manager for ""
	I1007 10:55:30.884477   29447 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1007 10:55:30.884532   29447 start.go:340] cluster config:
	{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.3
9.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.2 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:55:30.884667   29447 iso.go:125] acquiring lock: {Name:mk894f62b1e70fe8556c552f818beb1e4be6fad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:55:30.887625   29447 out.go:177] * Starting "ha-406505" primary control-plane node in "ha-406505" cluster
	I1007 10:55:30.889130   29447 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:55:30.889173   29447 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 10:55:30.889180   29447 cache.go:56] Caching tarball of preloaded images
	I1007 10:55:30.889265   29447 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 10:55:30.889276   29447 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 10:55:30.889406   29447 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:55:30.889609   29447 start.go:360] acquireMachinesLock for ha-406505: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 10:55:30.889652   29447 start.go:364] duration metric: took 24.494µs to acquireMachinesLock for "ha-406505"
	I1007 10:55:30.889665   29447 start.go:96] Skipping create...Using existing machine configuration
	I1007 10:55:30.889672   29447 fix.go:54] fixHost starting: 
	I1007 10:55:30.889919   29447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:55:30.889956   29447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:55:30.905409   29447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44753
	I1007 10:55:30.905796   29447 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:55:30.906241   29447 main.go:141] libmachine: Using API Version  1
	I1007 10:55:30.906267   29447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:55:30.906599   29447 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:55:30.906789   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:55:30.906907   29447 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:55:30.908591   29447 fix.go:112] recreateIfNeeded on ha-406505: state=Running err=<nil>
	W1007 10:55:30.908611   29447 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 10:55:30.911510   29447 out.go:177] * Updating the running kvm2 "ha-406505" VM ...
	I1007 10:55:30.912725   29447 machine.go:93] provisionDockerMachine start ...
	I1007 10:55:30.912748   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:55:30.913010   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:30.915628   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:30.916120   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:30.916146   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:30.916330   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:55:30.916511   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:30.916680   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:30.916822   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:55:30.916955   29447 main.go:141] libmachine: Using SSH client type: native
	I1007 10:55:30.917153   29447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:55:30.917166   29447 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 10:55:31.033780   29447 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406505
	
	I1007 10:55:31.033807   29447 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:55:31.034055   29447 buildroot.go:166] provisioning hostname "ha-406505"
	I1007 10:55:31.034084   29447 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:55:31.034284   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:31.036957   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.037413   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.037434   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.037635   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:55:31.037817   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.037986   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.038124   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:55:31.038289   29447 main.go:141] libmachine: Using SSH client type: native
	I1007 10:55:31.038459   29447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:55:31.038471   29447 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406505 && echo "ha-406505" | sudo tee /etc/hostname
	I1007 10:55:31.163165   29447 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406505
	
	I1007 10:55:31.163191   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:31.165768   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.166076   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.166103   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.166240   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:55:31.166482   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.166659   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.166867   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:55:31.167037   29447 main.go:141] libmachine: Using SSH client type: native
	I1007 10:55:31.167200   29447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:55:31.167215   29447 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406505' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406505/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406505' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 10:55:31.281078   29447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
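The SSH snippet above is how the provisioner makes the hostname resolve locally: it only touches /etc/hosts when no existing line ends in the hostname, rewriting a 127.0.1.1 entry if one is present and appending one otherwise. A minimal Go sketch that reproduces just this command construction (the function name and parameter are illustrative, not minikube's actual API):

    package main

    import "fmt"

    // ensureHostsCmd returns a shell snippet that maps 127.0.1.1 to the given
    // hostname in /etc/hosts, matching the logic visible in the log above:
    // rewrite an existing 127.0.1.1 line if present, otherwise append one.
    func ensureHostsCmd(hostname string) string {
        return fmt.Sprintf(`
            if ! grep -xq '.*\s%s' /etc/hosts; then
                if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
                else
                    echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
                fi
            fi`, hostname, hostname, hostname)
    }

    func main() {
        fmt.Println(ensureHostsCmd("ha-406505"))
    }

Printing the result for "ha-406505" yields the same script shown above.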
	I1007 10:55:31.281115   29447 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 10:55:31.281162   29447 buildroot.go:174] setting up certificates
	I1007 10:55:31.281174   29447 provision.go:84] configureAuth start
	I1007 10:55:31.281188   29447 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:55:31.281444   29447 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:55:31.283970   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.284388   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.284407   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.284595   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:31.287215   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.287589   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.287607   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.287775   29447 provision.go:143] copyHostCerts
	I1007 10:55:31.287819   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:55:31.287852   29447 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 10:55:31.287869   29447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:55:31.287940   29447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 10:55:31.288067   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:55:31.288094   29447 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 10:55:31.288104   29447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:55:31.288150   29447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 10:55:31.288213   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:55:31.288231   29447 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 10:55:31.288238   29447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:55:31.288273   29447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 10:55:31.288330   29447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.ha-406505 san=[127.0.0.1 192.168.39.250 ha-406505 localhost minikube]
	I1007 10:55:31.355824   29447 provision.go:177] copyRemoteCerts
	I1007 10:55:31.355877   29447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 10:55:31.355903   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:31.358704   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.359013   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.359045   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.359197   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:55:31.359373   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.359532   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:55:31.359697   29447 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:55:31.447226   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 10:55:31.447288   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 10:55:31.474841   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 10:55:31.474941   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1007 10:55:31.503482   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 10:55:31.503562   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 10:55:31.530808   29447 provision.go:87] duration metric: took 249.62125ms to configureAuth
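configureAuth above regenerates the Docker-machine server certificate with both IP and DNS SANs (127.0.0.1, 192.168.39.250, ha-406505, localhost, minikube) and copies it to /etc/docker on the guest. A self-contained Go sketch of issuing such a SAN-bearing certificate with crypto/x509, with keys generated in memory and error handling elided; illustrative only, not minikube's actual code path:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Self-signed CA kept in memory for the example; minikube loads
        // ca.pem / ca-key.pem from its cert store instead.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the SANs seen in the log: IPs plus DNS names.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-406505"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.250")},
            DNSNames:     []string{"ha-406505", "localhost", "minikube"},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }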
	I1007 10:55:31.530835   29447 buildroot.go:189] setting minikube options for container-runtime
	I1007 10:55:31.531044   29447 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:55:31.531130   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:31.534412   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.534867   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.534899   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.535087   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:55:31.535266   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.535472   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.535637   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:55:31.535791   29447 main.go:141] libmachine: Using SSH client type: native
	I1007 10:55:31.535959   29447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:55:31.536003   29447 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 10:57:02.380736   29447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 10:57:02.380814   29447 machine.go:96] duration metric: took 1m31.468035985s to provisionDockerMachine
	I1007 10:57:02.380830   29447 start.go:293] postStartSetup for "ha-406505" (driver="kvm2")
	I1007 10:57:02.380850   29447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 10:57:02.380876   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.381188   29447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 10:57:02.381220   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:57:02.384384   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.384896   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.384926   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.385018   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:57:02.385183   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.385347   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:57:02.385473   29447 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:57:02.471888   29447 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 10:57:02.476934   29447 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 10:57:02.476965   29447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 10:57:02.477032   29447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 10:57:02.477129   29447 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 10:57:02.477144   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /etc/ssl/certs/110962.pem
	I1007 10:57:02.477256   29447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 10:57:02.487344   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:57:02.513177   29447 start.go:296] duration metric: took 132.325528ms for postStartSetup
	I1007 10:57:02.513227   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.513496   29447 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1007 10:57:02.513521   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:57:02.516263   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.516783   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.516813   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.516980   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:57:02.517176   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.517396   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:57:02.517564   29447 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	W1007 10:57:02.602805   29447 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1007 10:57:02.602831   29447 fix.go:56] duration metric: took 1m31.713158307s for fixHost
	I1007 10:57:02.602856   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:57:02.605787   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.606125   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.606153   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.606373   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:57:02.606599   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.606770   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.606900   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:57:02.607063   29447 main.go:141] libmachine: Using SSH client type: native
	I1007 10:57:02.607214   29447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:57:02.607225   29447 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 10:57:02.716959   29447 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728298622.682212120
	
	I1007 10:57:02.716987   29447 fix.go:216] guest clock: 1728298622.682212120
	I1007 10:57:02.716996   29447 fix.go:229] Guest: 2024-10-07 10:57:02.68221212 +0000 UTC Remote: 2024-10-07 10:57:02.602839413 +0000 UTC m=+91.848037136 (delta=79.372707ms)
	I1007 10:57:02.717030   29447 fix.go:200] guest clock delta is within tolerance: 79.372707ms
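The delta above comes from running `date +%s.%N` on the guest and comparing the result with the host clock; 79.372707ms is within tolerance, so the guest clock is left alone. A small Go sketch of the same comparison using the timestamps from this log; the 2s tolerance constant is a placeholder, not the value minikube uses:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // guestTime parses the output of `date +%s.%N` (seconds.nanoseconds)
    // into a time.Time, e.g. "1728298622.682212120" from the log above.
    func guestTime(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, err = strconv.ParseInt(parts[1], 10, 64)
            if err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := guestTime("1728298622.682212120")
        if err != nil {
            panic(err)
        }
        // Host-side timestamp taken from the "Remote:" field in the log above.
        host := time.Date(2024, 10, 7, 10, 57, 2, 602839413, time.UTC)
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // hypothetical threshold for illustration
        fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta <= tolerance)
    }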
	I1007 10:57:02.717039   29447 start.go:83] releasing machines lock for "ha-406505", held for 1m31.827376309s
	I1007 10:57:02.717068   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.717326   29447 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:57:02.719717   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.720045   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.720070   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.720179   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.720690   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.720867   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.720951   29447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 10:57:02.721002   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:57:02.721045   29447 ssh_runner.go:195] Run: cat /version.json
	I1007 10:57:02.721066   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:57:02.723380   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.723574   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.723766   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.723798   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.723929   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:57:02.724086   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.724104   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.724106   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.724245   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:57:02.724286   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:57:02.724375   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.724386   29447 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:57:02.724493   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:57:02.724605   29447 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:57:02.828482   29447 ssh_runner.go:195] Run: systemctl --version
	I1007 10:57:02.834933   29447 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 10:57:02.995415   29447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 10:57:03.004313   29447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 10:57:03.004375   29447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 10:57:03.014071   29447 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1007 10:57:03.014098   29447 start.go:495] detecting cgroup driver to use...
	I1007 10:57:03.014160   29447 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 10:57:03.031548   29447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 10:57:03.045665   29447 docker.go:217] disabling cri-docker service (if available) ...
	I1007 10:57:03.045720   29447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 10:57:03.060885   29447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 10:57:03.075305   29447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 10:57:03.229941   29447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 10:57:03.380003   29447 docker.go:233] disabling docker service ...
	I1007 10:57:03.380072   29447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 10:57:03.397931   29447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 10:57:03.412383   29447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 10:57:03.567900   29447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 10:57:03.721366   29447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 10:57:03.737163   29447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 10:57:03.756494   29447 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 10:57:03.756570   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.767799   29447 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 10:57:03.767866   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.778739   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.789495   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.800585   29447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 10:57:03.813221   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.824053   29447 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.835220   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.845426   29447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 10:57:03.854894   29447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 10:57:03.864074   29447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:57:04.020012   29447 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 10:57:04.256195   29447 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 10:57:04.256262   29447 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 10:57:04.261541   29447 start.go:563] Will wait 60s for crictl version
	I1007 10:57:04.261605   29447 ssh_runner.go:195] Run: which crictl
	I1007 10:57:04.266424   29447 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 10:57:04.306687   29447 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
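After restarting CRI-O, the tool waits up to 60s for /var/run/crio/crio.sock to appear and for `crictl version` to answer before continuing. A minimal Go sketch of such a bounded wait on a socket path; the helper name and poll interval are illustrative, not minikube's implementation:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForPath polls for a file (here a CRI socket) until it exists or the
    // timeout elapses, roughly what the "Will wait 60s for socket path" step does.
    func waitForPath(path string, timeout, interval time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        // Path and overall timeout taken from the log; interval is a guess.
        if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("crio.sock is present")
    }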
	I1007 10:57:04.306770   29447 ssh_runner.go:195] Run: crio --version
	I1007 10:57:04.342644   29447 ssh_runner.go:195] Run: crio --version
	I1007 10:57:04.376624   29447 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 10:57:04.378190   29447 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:57:04.381211   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:04.381557   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:04.381578   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:04.381799   29447 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 10:57:04.386556   29447 kubeadm.go:883] updating cluster {Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.2 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 10:57:04.386679   29447 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:57:04.386728   29447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:57:04.431534   29447 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 10:57:04.431564   29447 crio.go:433] Images already preloaded, skipping extraction
	I1007 10:57:04.431618   29447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:57:04.471722   29447 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 10:57:04.471751   29447 cache_images.go:84] Images are preloaded, skipping loading
	I1007 10:57:04.471764   29447 kubeadm.go:934] updating node { 192.168.39.250 8443 v1.31.1 crio true true} ...
	I1007 10:57:04.471889   29447 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406505 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
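The unit drop-in above overrides the kubelet ExecStart with node-specific flags (hostname override and node IP) derived from the config shown after it. A small Go sketch assembling that flag line from the same inputs; the function and its parameters are illustrative stand-ins, not minikube's real types:

    package main

    import (
        "fmt"
        "strings"
    )

    // kubeletExecStart assembles the ExecStart line visible in the unit
    // drop-in above from the node name, node IP and Kubernetes version.
    func kubeletExecStart(nodeName, nodeIP, k8sVersion string) string {
        args := []string{
            "/var/lib/minikube/binaries/" + k8sVersion + "/kubelet",
            "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
            "--config=/var/lib/kubelet/config.yaml",
            "--hostname-override=" + nodeName,
            "--kubeconfig=/etc/kubernetes/kubelet.conf",
            "--node-ip=" + nodeIP,
        }
        return "ExecStart=" + strings.Join(args, " ")
    }

    func main() {
        fmt.Println(kubeletExecStart("ha-406505", "192.168.39.250", "v1.31.1"))
    }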
	I1007 10:57:04.471959   29447 ssh_runner.go:195] Run: crio config
	I1007 10:57:04.525534   29447 cni.go:84] Creating CNI manager for ""
	I1007 10:57:04.525555   29447 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1007 10:57:04.525564   29447 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 10:57:04.525581   29447 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.250 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-406505 NodeName:ha-406505 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 10:57:04.525698   29447 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-406505"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
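The kubeadm, kubelet and kube-proxy documents above are rendered from the option set logged at kubeadm.go:181 and later written to /var/tmp/minikube/kubeadm.yaml.new. A minimal Go sketch of rendering one such fragment with text/template; the struct and field names are stand-ins, not minikube's actual template data:

    package main

    import (
        "os"
        "text/template"
    )

    // clusterOpts is a trimmed-down stand-in for the options struct logged above.
    type clusterOpts struct {
        BindPort            int
        KubernetesVersion   string
        PodSubnet           string
        ServiceSubnet       string
        ControlPlaneAddress string
    }

    const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.BindPort}}
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
        opts := clusterOpts{
            BindPort:            8443,
            KubernetesVersion:   "v1.31.1",
            PodSubnet:           "10.244.0.0/16",
            ServiceSubnet:       "10.96.0.0/12",
            ControlPlaneAddress: "control-plane.minikube.internal",
        }
        tmpl := template.Must(template.New("cluster").Parse(clusterCfg))
        if err := tmpl.Execute(os.Stdout, opts); err != nil {
            panic(err)
        }
    }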
	
	I1007 10:57:04.525716   29447 kube-vip.go:115] generating kube-vip config ...
	I1007 10:57:04.525751   29447 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 10:57:04.537676   29447 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 10:57:04.537777   29447 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
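The static pod above runs kube-vip with control-plane load-balancing enabled, so the HA VIP 192.168.39.254:8443 fronts the three control-plane nodes listed in the cluster config (192.168.39.250, .37, .102). A rough Go sketch that merely probes the VIP with a TCP dial; this is a coarse reachability check for illustration, not how minikube verifies the VIP:

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // VIP and port come from the kube-vip manifest above.
        addr := net.JoinHostPort("192.168.39.254", "8443")
        conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "control-plane VIP %s not reachable: %v\n", addr, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Printf("control-plane VIP %s accepted a TCP connection\n", addr)
    }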
	I1007 10:57:04.537841   29447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 10:57:04.547556   29447 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 10:57:04.547619   29447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1007 10:57:04.557240   29447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1007 10:57:04.575646   29447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 10:57:04.593225   29447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1007 10:57:04.611864   29447 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 10:57:04.630249   29447 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 10:57:04.634460   29447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:57:04.779449   29447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:57:04.794566   29447 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505 for IP: 192.168.39.250
	I1007 10:57:04.794589   29447 certs.go:194] generating shared ca certs ...
	I1007 10:57:04.794603   29447 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:57:04.794760   29447 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 10:57:04.794902   29447 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 10:57:04.794927   29447 certs.go:256] generating profile certs ...
	I1007 10:57:04.795030   29447 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key
	I1007 10:57:04.795066   29447 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.0791b376
	I1007 10:57:04.795083   29447 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.0791b376 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.250 192.168.39.37 192.168.39.102 192.168.39.254]
	I1007 10:57:05.108330   29447 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.0791b376 ...
	I1007 10:57:05.108361   29447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.0791b376: {Name:mk04adcfb95e9408df73c49cc28f69521efd4eb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:57:05.108524   29447 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.0791b376 ...
	I1007 10:57:05.108541   29447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.0791b376: {Name:mk08d01b1655950dbc2445f79f2d8bdc29563add Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:57:05.108614   29447 certs.go:381] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.0791b376 -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt
	I1007 10:57:05.108753   29447 certs.go:385] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.0791b376 -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key
	I1007 10:57:05.108875   29447 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key
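The apiserver certificate generated above lists 10.96.0.1 among its SANs: the first usable address of the ServiceCIDR 10.96.0.0/12, which is the ClusterIP assigned to the in-cluster `kubernetes` Service. A tiny Go sketch deriving that address from the prefix:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // 10.96.0.0/12 is the ServiceCIDR from the cluster config; its first
        // usable address (10.96.0.1) is what the apiserver cert must cover.
        prefix := netip.MustParsePrefix("10.96.0.0/12")
        first := prefix.Addr().Next()
        fmt.Println(first) // 10.96.0.1
    }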
	I1007 10:57:05.108890   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 10:57:05.108904   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 10:57:05.108914   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 10:57:05.108926   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 10:57:05.108938   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 10:57:05.108949   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 10:57:05.108961   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 10:57:05.108973   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 10:57:05.109020   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 10:57:05.109055   29447 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 10:57:05.109066   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 10:57:05.109091   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 10:57:05.109135   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 10:57:05.109164   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 10:57:05.109202   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:57:05.109238   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem -> /usr/share/ca-certificates/11096.pem
	I1007 10:57:05.109251   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /usr/share/ca-certificates/110962.pem
	I1007 10:57:05.109262   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:57:05.109871   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 10:57:05.360442   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 10:57:05.605815   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 10:57:05.850088   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 10:57:06.219588   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1007 10:57:06.276707   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 10:57:06.318692   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 10:57:06.348933   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 10:57:06.385454   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 10:57:06.415472   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 10:57:06.447267   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 10:57:06.504935   29447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 10:57:06.532354   29447 ssh_runner.go:195] Run: openssl version
	I1007 10:57:06.539545   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 10:57:06.554465   29447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 10:57:06.560708   29447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 10:57:06.560773   29447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 10:57:06.569485   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
	I1007 10:57:06.586629   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 10:57:06.600762   29447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 10:57:06.608271   29447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 10:57:06.608356   29447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 10:57:06.616754   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 10:57:06.632118   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 10:57:06.646429   29447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:57:06.655247   29447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:57:06.655315   29447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:57:06.661893   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 10:57:06.674956   29447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 10:57:06.682421   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 10:57:06.688720   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 10:57:06.695527   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 10:57:06.702575   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 10:57:06.709386   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 10:57:06.715690   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1007 10:57:06.724003   29447 kubeadm.go:392] StartCluster: {Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clus
terName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.2 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:57:06.724168   29447 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 10:57:06.724228   29447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 10:57:06.792220   29447 cri.go:89] found id: "a64bf2e21e156733427ea0d3a45ec9f23d99632adb4fc9587bd263896cb45c81"
	I1007 10:57:06.792250   29447 cri.go:89] found id: "606ec7a724513e10c9da9a27b0b650b8c529f2df4f1079e79bcb30d4c7839fcf"
	I1007 10:57:06.792256   29447 cri.go:89] found id: "d6f4624f73f68c6b59c63a1c3a5b28b4d748f196ec2bac402e5462f97addeae5"
	I1007 10:57:06.792261   29447 cri.go:89] found id: "630e5de32b697cc2301625c159c7ec527a1d4c719a4018553d5edb345a23ca79"
	I1007 10:57:06.792265   29447 cri.go:89] found id: "54438f91675378609a3f994ca735839da4a4bdd24c088cd3a42b45cdf6008d74"
	I1007 10:57:06.792270   29447 cri.go:89] found id: "815c284d9f8c834cea5412ecc0f136a8219af90faff522693c81431cfcbb170e"
	I1007 10:57:06.792273   29447 cri.go:89] found id: "048e86e40dd08c62b9fed5f84a6d7c6ba376d8e40348f0a461ee4b5ed1eb0c1e"
	I1007 10:57:06.792284   29447 cri.go:89] found id: "55130afb3140b78545837a44e0d1200ed084970a981975f2439a746c1aee5ecd"
	I1007 10:57:06.792289   29447 cri.go:89] found id: "1799fca1e0776626eea0f6a1d7d4e5470021a7a26022e13fbb3dd3fd3a4dff19"
	I1007 10:57:06.792295   29447 cri.go:89] found id: "809bd2a742c43a680efa79ca906fec95b70290a0d3fe3628198ee66abc1da27b"
	I1007 10:57:06.792299   29447 cri.go:89] found id: "46ee0ba8c50585b784c79a0db0e2996a651504cb4a60879c5e7db44d64cd22c6"
	I1007 10:57:06.792303   29447 cri.go:89] found id: "b0cc4a36e486c6a488e846bfbf03e43b7da70bb2bf99a153487b87f34d565f12"
	I1007 10:57:06.792308   29447 cri.go:89] found id: "0ebc4ee6afc90a7e8d867a5ba3221808427590e259e04a5ba21cc1196931b136"
	I1007 10:57:06.792312   29447 cri.go:89] found id: "4abb8ea9312274b1d39534e4b29dcfa3f2435a34e478f7748745c76ad1380dec"
	I1007 10:57:06.792320   29447 cri.go:89] found id: "99b7425285dcb9164d5c7fff76a667317585d30360dbfa7acfcbbb4564f111ff"
	I1007 10:57:06.792324   29447 cri.go:89] found id: "fa4965d1b169f33dee456a383fa75c0f93cca6bf8017be5bf1f0787d83919887"
	I1007 10:57:06.792328   29447 cri.go:89] found id: "11a16a81bf6bf12ab8aa881ebbb9fc65f881114f3caf29ac204033459e11987b"
	I1007 10:57:06.792333   29447 cri.go:89] found id: "eb0b61d1fd92070463676e33d6b2f91e643d202d53b9af7712c8459581d39750"
	I1007 10:57:06.792337   29447 cri.go:89] found id: ""
	I1007 10:57:06.792388   29447 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
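The log above ends with minikube re-validating the existing control-plane certificates (openssl x509 -checkend 86400 succeeds for each, so nothing expires within the next 24 hours) and then enumerating the kube-system containers through crictl before StartCluster. Below is a minimal sketch of the same checks run by hand inside the node (e.g. via minikube -p ha-406505 ssh); the certificate paths and the crictl filter are copied from the log, and sudo is added only because a manual shell may lack the permissions the test harness has:

	# Verify each control-plane certificate is still valid for at least 86400s (24h):
	for crt in \
	    /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	    /var/lib/minikube/certs/apiserver-etcd-client.crt \
	    /var/lib/minikube/certs/etcd/server.crt \
	    /var/lib/minikube/certs/etcd/healthcheck-client.crt \
	    /var/lib/minikube/certs/etcd/peer.crt \
	    /var/lib/minikube/certs/front-proxy-client.crt; do
	  sudo openssl x509 -noout -in "$crt" -checkend 86400 \
	    && echo "ok:       $crt" \
	    || echo "expiring: $crt"
	done

	# List kube-system container IDs the same way the cri helper does:
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system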
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-406505 -n ha-406505
helpers_test.go:261: (dbg) Run:  kubectl --context ha-406505 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: etcd-ha-406505-m03 kube-apiserver-ha-406505-m03 kube-controller-manager-ha-406505-m03 kube-scheduler-ha-406505-m03 kube-vip-ha-406505-m03
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-406505 describe pod etcd-ha-406505-m03 kube-apiserver-ha-406505-m03 kube-controller-manager-ha-406505-m03 kube-scheduler-ha-406505-m03 kube-vip-ha-406505-m03
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ha-406505 describe pod etcd-ha-406505-m03 kube-apiserver-ha-406505-m03 kube-controller-manager-ha-406505-m03 kube-scheduler-ha-406505-m03 kube-vip-ha-406505-m03: exit status 1 (68.242399ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "etcd-ha-406505-m03" not found
	Error from server (NotFound): pods "kube-apiserver-ha-406505-m03" not found
	Error from server (NotFound): pods "kube-controller-manager-ha-406505-m03" not found
	Error from server (NotFound): pods "kube-scheduler-ha-406505-m03" not found
	Error from server (NotFound): pods "kube-vip-ha-406505-m03" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context ha-406505 describe pod etcd-ha-406505-m03 kube-apiserver-ha-406505-m03 kube-controller-manager-ha-406505-m03 kube-scheduler-ha-406505-m03 kube-vip-ha-406505-m03: exit status 1
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (783.34s)
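For the post-mortem above, helpers_test.go first lists every pod whose phase is not Running and then tries to describe them. The describe calls are issued without a namespace, so kubectl searches the default namespace while these control-plane pods live in kube-system, which is why each one comes back NotFound (they may, of course, also have been removed along with the m03 node by that point). The equivalent commands, with the namespace made explicit; the context and pod names are copied from the output above:

	# Pods not in phase Running, across all namespaces (what helpers_test.go:261 runs):
	kubectl --context ha-406505 get pods -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'

	# Describe one of them in the namespace it actually lives in:
	kubectl --context ha-406505 -n kube-system describe pod etcd-ha-406505-m03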

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-406505 node delete m03 -v=7 --alsologtostderr: (5.150542042s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr: exit status 7 (483.621134ms)

                                                
                                                
-- stdout --
	ha-406505
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-406505-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-406505-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
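In the status output above only ha-406505-m04 is unhealthy: its host and kubelet are reported Stopped, and that stopped worker appears to be what turns the status command's exit code non-zero and trips the assertion at ha_test.go:497 further down. A short sketch for bringing the worker back up and re-running the same check, using the binary and flags the test already uses:

	# Restart the stopped worker node, then re-check cluster status:
	out/minikube-linux-amd64 -p ha-406505 node start m04 --alsologtostderr
	out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr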
** stderr ** 
	I1007 11:06:37.331099   32279 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:06:37.331206   32279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:06:37.331214   32279 out.go:358] Setting ErrFile to fd 2...
	I1007 11:06:37.331218   32279 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:06:37.331423   32279 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 11:06:37.331589   32279 out.go:352] Setting JSON to false
	I1007 11:06:37.331611   32279 mustload.go:65] Loading cluster: ha-406505
	I1007 11:06:37.331708   32279 notify.go:220] Checking for updates...
	I1007 11:06:37.332004   32279 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:06:37.332031   32279 status.go:174] checking status of ha-406505 ...
	I1007 11:06:37.332526   32279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:06:37.332573   32279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:06:37.347315   32279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38745
	I1007 11:06:37.347808   32279 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:06:37.348420   32279 main.go:141] libmachine: Using API Version  1
	I1007 11:06:37.348436   32279 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:06:37.348827   32279 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:06:37.349061   32279 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 11:06:37.350791   32279 status.go:371] ha-406505 host status = "Running" (err=<nil>)
	I1007 11:06:37.350810   32279 host.go:66] Checking if "ha-406505" exists ...
	I1007 11:06:37.351075   32279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:06:37.351116   32279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:06:37.366055   32279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39583
	I1007 11:06:37.366433   32279 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:06:37.366875   32279 main.go:141] libmachine: Using API Version  1
	I1007 11:06:37.366904   32279 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:06:37.367258   32279 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:06:37.367410   32279 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 11:06:37.370203   32279 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 11:06:37.370655   32279 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 11:06:37.370679   32279 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 11:06:37.370838   32279 host.go:66] Checking if "ha-406505" exists ...
	I1007 11:06:37.371138   32279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:06:37.371178   32279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:06:37.385648   32279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46787
	I1007 11:06:37.386066   32279 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:06:37.386464   32279 main.go:141] libmachine: Using API Version  1
	I1007 11:06:37.386493   32279 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:06:37.386806   32279 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:06:37.386965   32279 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 11:06:37.387134   32279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 11:06:37.387164   32279 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 11:06:37.390340   32279 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 11:06:37.390832   32279 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 11:06:37.390849   32279 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 11:06:37.391004   32279 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 11:06:37.391165   32279 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 11:06:37.391315   32279 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 11:06:37.391411   32279 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 11:06:37.476989   32279 ssh_runner.go:195] Run: systemctl --version
	I1007 11:06:37.484098   32279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 11:06:37.504696   32279 kubeconfig.go:125] found "ha-406505" server: "https://192.168.39.254:8443"
	I1007 11:06:37.504740   32279 api_server.go:166] Checking apiserver status ...
	I1007 11:06:37.504782   32279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 11:06:37.521689   32279 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5062/cgroup
	W1007 11:06:37.532667   32279 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5062/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1007 11:06:37.532720   32279 ssh_runner.go:195] Run: ls
	I1007 11:06:37.538574   32279 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1007 11:06:37.543029   32279 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1007 11:06:37.543050   32279 status.go:463] ha-406505 apiserver status = Running (err=<nil>)
	I1007 11:06:37.543059   32279 status.go:176] ha-406505 status: &{Name:ha-406505 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 11:06:37.543088   32279 status.go:174] checking status of ha-406505-m02 ...
	I1007 11:06:37.543406   32279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:06:37.543471   32279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:06:37.559638   32279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41327
	I1007 11:06:37.560163   32279 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:06:37.560792   32279 main.go:141] libmachine: Using API Version  1
	I1007 11:06:37.560816   32279 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:06:37.561203   32279 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:06:37.561420   32279 main.go:141] libmachine: (ha-406505-m02) Calling .GetState
	I1007 11:06:37.563178   32279 status.go:371] ha-406505-m02 host status = "Running" (err=<nil>)
	I1007 11:06:37.563195   32279 host.go:66] Checking if "ha-406505-m02" exists ...
	I1007 11:06:37.563514   32279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:06:37.563557   32279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:06:37.578810   32279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46811
	I1007 11:06:37.579314   32279 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:06:37.579830   32279 main.go:141] libmachine: Using API Version  1
	I1007 11:06:37.579854   32279 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:06:37.580183   32279 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:06:37.580381   32279 main.go:141] libmachine: (ha-406505-m02) Calling .GetIP
	I1007 11:06:37.583286   32279 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 11:06:37.583699   32279 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:57:18 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 11:06:37.583724   32279 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 11:06:37.583853   32279 host.go:66] Checking if "ha-406505-m02" exists ...
	I1007 11:06:37.584229   32279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:06:37.584279   32279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:06:37.599933   32279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41883
	I1007 11:06:37.600434   32279 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:06:37.600939   32279 main.go:141] libmachine: Using API Version  1
	I1007 11:06:37.600966   32279 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:06:37.601327   32279 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:06:37.601500   32279 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 11:06:37.601686   32279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 11:06:37.601717   32279 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 11:06:37.605083   32279 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 11:06:37.605604   32279 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:57:18 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 11:06:37.605630   32279 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 11:06:37.605852   32279 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 11:06:37.606028   32279 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 11:06:37.606219   32279 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 11:06:37.606360   32279 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa Username:docker}
	I1007 11:06:37.688864   32279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 11:06:37.710226   32279 kubeconfig.go:125] found "ha-406505" server: "https://192.168.39.254:8443"
	I1007 11:06:37.710254   32279 api_server.go:166] Checking apiserver status ...
	I1007 11:06:37.710285   32279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 11:06:37.726184   32279 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1352/cgroup
	W1007 11:06:37.738170   32279 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1352/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1007 11:06:37.738219   32279 ssh_runner.go:195] Run: ls
	I1007 11:06:37.743373   32279 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1007 11:06:37.747846   32279 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1007 11:06:37.747870   32279 status.go:463] ha-406505-m02 apiserver status = Running (err=<nil>)
	I1007 11:06:37.747879   32279 status.go:176] ha-406505-m02 status: &{Name:ha-406505-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 11:06:37.747897   32279 status.go:174] checking status of ha-406505-m04 ...
	I1007 11:06:37.748288   32279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:06:37.748338   32279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:06:37.763427   32279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42381
	I1007 11:06:37.763929   32279 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:06:37.764424   32279 main.go:141] libmachine: Using API Version  1
	I1007 11:06:37.764447   32279 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:06:37.764814   32279 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:06:37.764997   32279 main.go:141] libmachine: (ha-406505-m04) Calling .GetState
	I1007 11:06:37.766563   32279 status.go:371] ha-406505-m04 host status = "Stopped" (err=<nil>)
	I1007 11:06:37.766576   32279 status.go:384] host is not running, skipping remaining checks
	I1007 11:06:37.766581   32279 status.go:176] ha-406505-m04 status: &{Name:ha-406505-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr" : exit status 7
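The stderr trace above also shows how the status check decides that the apiservers are healthy: on each control-plane node it locates the kube-apiserver process with pgrep, attempts (and fails, with only a warning) to read a freezer cgroup for it, and then probes /healthz through the HA virtual IP 192.168.39.254:8443, which returns 200. The freezer warning is harmless here; it is typically what happens on a unified cgroup v2 hierarchy, where no separate freezer controller line appears in /proc/<pid>/cgroup. A sketch of the same probes, assuming the VIP from the APIServerHAVIP field in the cluster config shown earlier; -k only skips certificate verification, and /healthz is normally reachable anonymously under the default RBAC:

	# Probe apiserver health through the HA virtual IP:
	curl -ks https://192.168.39.254:8443/healthz ; echo

	# Inside a control-plane node (minikube -p ha-406505 ssh), the PID lookup from the trace:
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'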
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-406505 -n ha-406505
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-406505 logs -n 25: (4.607317023s)
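The post-mortem below was collected with logs -n 25; when a failure like this needs offline inspection, the same binary can also write the complete log to a file via the --file flag (the path below is only an example):

	out/minikube-linux-amd64 -p ha-406505 logs --file=/tmp/ha-406505-post-mortem.log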
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m02 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m03_ha-406505-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m03:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04:/home/docker/cp-test_ha-406505-m03_ha-406505-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m04 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m03_ha-406505-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-406505 cp testdata/cp-test.txt                                                | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2665267876/001/cp-test_ha-406505-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505:/home/docker/cp-test_ha-406505-m04_ha-406505.txt                       |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505 sudo cat                                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m04_ha-406505.txt                                 |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m02:/home/docker/cp-test_ha-406505-m04_ha-406505-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m02 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m04_ha-406505-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03:/home/docker/cp-test_ha-406505-m04_ha-406505-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m03 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m04_ha-406505-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-406505 node stop m02 -v=7                                                     | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-406505 node start m02 -v=7                                                    | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-406505 -v=7                                                           | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-406505 -v=7                                                                | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-406505 --wait=true -v=7                                                    | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-406505                                                                | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 11:06 UTC |                     |
	| node    | ha-406505 node delete m03 -v=7                                                   | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 11:06 UTC | 07 Oct 24 11:06 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 10:55:30
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 10:55:30.794033   29447 out.go:345] Setting OutFile to fd 1 ...
	I1007 10:55:30.794343   29447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:55:30.794353   29447 out.go:358] Setting ErrFile to fd 2...
	I1007 10:55:30.794358   29447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:55:30.794636   29447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 10:55:30.795389   29447 out.go:352] Setting JSON to false
	I1007 10:55:30.796394   29447 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2225,"bootTime":1728296306,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 10:55:30.796491   29447 start.go:139] virtualization: kvm guest
	I1007 10:55:30.799104   29447 out.go:177] * [ha-406505] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 10:55:30.800626   29447 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 10:55:30.800640   29447 notify.go:220] Checking for updates...
	I1007 10:55:30.803410   29447 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 10:55:30.804914   29447 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:55:30.806210   29447 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:55:30.807469   29447 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 10:55:30.808873   29447 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 10:55:30.810633   29447 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:55:30.810741   29447 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 10:55:30.811301   29447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:55:30.811382   29447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:55:30.827997   29447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41053
	I1007 10:55:30.828419   29447 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:55:30.828927   29447 main.go:141] libmachine: Using API Version  1
	I1007 10:55:30.828950   29447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:55:30.829275   29447 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:55:30.829462   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:55:30.866638   29447 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 10:55:30.867832   29447 start.go:297] selected driver: kvm2
	I1007 10:55:30.867847   29447 start.go:901] validating driver "kvm2" against &{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.2 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:55:30.867993   29447 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 10:55:30.868324   29447 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:55:30.868393   29447 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19761-3912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 10:55:30.883607   29447 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 10:55:30.884398   29447 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 10:55:30.884430   29447 cni.go:84] Creating CNI manager for ""
	I1007 10:55:30.884477   29447 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1007 10:55:30.884532   29447 start.go:340] cluster config:
	{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.3
9.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.2 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:55:30.884667   29447 iso.go:125] acquiring lock: {Name:mk894f62b1e70fe8556c552f818beb1e4be6fad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:55:30.887625   29447 out.go:177] * Starting "ha-406505" primary control-plane node in "ha-406505" cluster
	I1007 10:55:30.889130   29447 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:55:30.889173   29447 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 10:55:30.889180   29447 cache.go:56] Caching tarball of preloaded images
	I1007 10:55:30.889265   29447 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 10:55:30.889276   29447 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 10:55:30.889406   29447 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:55:30.889609   29447 start.go:360] acquireMachinesLock for ha-406505: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 10:55:30.889652   29447 start.go:364] duration metric: took 24.494µs to acquireMachinesLock for "ha-406505"
	I1007 10:55:30.889665   29447 start.go:96] Skipping create...Using existing machine configuration
	I1007 10:55:30.889672   29447 fix.go:54] fixHost starting: 
	I1007 10:55:30.889919   29447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:55:30.889956   29447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:55:30.905409   29447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44753
	I1007 10:55:30.905796   29447 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:55:30.906241   29447 main.go:141] libmachine: Using API Version  1
	I1007 10:55:30.906267   29447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:55:30.906599   29447 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:55:30.906789   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:55:30.906907   29447 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:55:30.908591   29447 fix.go:112] recreateIfNeeded on ha-406505: state=Running err=<nil>
	W1007 10:55:30.908611   29447 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 10:55:30.911510   29447 out.go:177] * Updating the running kvm2 "ha-406505" VM ...
	I1007 10:55:30.912725   29447 machine.go:93] provisionDockerMachine start ...
	I1007 10:55:30.912748   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:55:30.913010   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:30.915628   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:30.916120   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:30.916146   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:30.916330   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:55:30.916511   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:30.916680   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:30.916822   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:55:30.916955   29447 main.go:141] libmachine: Using SSH client type: native
	I1007 10:55:30.917153   29447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:55:30.917166   29447 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 10:55:31.033780   29447 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406505
	
	I1007 10:55:31.033807   29447 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:55:31.034055   29447 buildroot.go:166] provisioning hostname "ha-406505"
	I1007 10:55:31.034084   29447 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:55:31.034284   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:31.036957   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.037413   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.037434   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.037635   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:55:31.037817   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.037986   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.038124   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:55:31.038289   29447 main.go:141] libmachine: Using SSH client type: native
	I1007 10:55:31.038459   29447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:55:31.038471   29447 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406505 && echo "ha-406505" | sudo tee /etc/hostname
	I1007 10:55:31.163165   29447 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406505
	
	I1007 10:55:31.163191   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:31.165768   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.166076   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.166103   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.166240   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:55:31.166482   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.166659   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.166867   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:55:31.167037   29447 main.go:141] libmachine: Using SSH client type: native
	I1007 10:55:31.167200   29447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:55:31.167215   29447 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406505' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406505/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406505' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 10:55:31.281078   29447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:55:31.281115   29447 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 10:55:31.281162   29447 buildroot.go:174] setting up certificates
	I1007 10:55:31.281174   29447 provision.go:84] configureAuth start
	I1007 10:55:31.281188   29447 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:55:31.281444   29447 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:55:31.283970   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.284388   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.284407   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.284595   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:31.287215   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.287589   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.287607   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.287775   29447 provision.go:143] copyHostCerts
	I1007 10:55:31.287819   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:55:31.287852   29447 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 10:55:31.287869   29447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:55:31.287940   29447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 10:55:31.288067   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:55:31.288094   29447 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 10:55:31.288104   29447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:55:31.288150   29447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 10:55:31.288213   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:55:31.288231   29447 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 10:55:31.288238   29447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:55:31.288273   29447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 10:55:31.288330   29447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.ha-406505 san=[127.0.0.1 192.168.39.250 ha-406505 localhost minikube]
	I1007 10:55:31.355824   29447 provision.go:177] copyRemoteCerts
	I1007 10:55:31.355877   29447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 10:55:31.355903   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:31.358704   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.359013   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.359045   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.359197   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:55:31.359373   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.359532   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:55:31.359697   29447 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:55:31.447226   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 10:55:31.447288   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 10:55:31.474841   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 10:55:31.474941   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1007 10:55:31.503482   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 10:55:31.503562   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 10:55:31.530808   29447 provision.go:87] duration metric: took 249.62125ms to configureAuth
	I1007 10:55:31.530835   29447 buildroot.go:189] setting minikube options for container-runtime
	I1007 10:55:31.531044   29447 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:55:31.531130   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:31.534412   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.534867   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.534899   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.535087   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:55:31.535266   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.535472   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.535637   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:55:31.535791   29447 main.go:141] libmachine: Using SSH client type: native
	I1007 10:55:31.535959   29447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:55:31.536003   29447 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 10:57:02.380736   29447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 10:57:02.380814   29447 machine.go:96] duration metric: took 1m31.468035985s to provisionDockerMachine
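The 1m31s provisionDockerMachine total logged just above is dominated by the preceding SSH command, which drops CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O. A quick way to confirm the drop-in took effect (a sketch; how crio.service sources that sysconfig file is specific to the minikube ISO):

    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl cat crio | grep -i environmentfile   # check the unit actually sources the sysconfig drop-in
    systemctl is-active crio                       # the restart above should leave it "active"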
	I1007 10:57:02.380830   29447 start.go:293] postStartSetup for "ha-406505" (driver="kvm2")
	I1007 10:57:02.380850   29447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 10:57:02.380876   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.381188   29447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 10:57:02.381220   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:57:02.384384   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.384896   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.384926   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.385018   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:57:02.385183   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.385347   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:57:02.385473   29447 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:57:02.471888   29447 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 10:57:02.476934   29447 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 10:57:02.476965   29447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 10:57:02.477032   29447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 10:57:02.477129   29447 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 10:57:02.477144   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /etc/ssl/certs/110962.pem
	I1007 10:57:02.477256   29447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 10:57:02.487344   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:57:02.513177   29447 start.go:296] duration metric: took 132.325528ms for postStartSetup
	I1007 10:57:02.513227   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.513496   29447 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1007 10:57:02.513521   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:57:02.516263   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.516783   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.516813   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.516980   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:57:02.517176   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.517396   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:57:02.517564   29447 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	W1007 10:57:02.602805   29447 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1007 10:57:02.602831   29447 fix.go:56] duration metric: took 1m31.713158307s for fixHost
	I1007 10:57:02.602856   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:57:02.605787   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.606125   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.606153   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.606373   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:57:02.606599   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.606770   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.606900   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:57:02.607063   29447 main.go:141] libmachine: Using SSH client type: native
	I1007 10:57:02.607214   29447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:57:02.607225   29447 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 10:57:02.716959   29447 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728298622.682212120
	
	I1007 10:57:02.716987   29447 fix.go:216] guest clock: 1728298622.682212120
	I1007 10:57:02.716996   29447 fix.go:229] Guest: 2024-10-07 10:57:02.68221212 +0000 UTC Remote: 2024-10-07 10:57:02.602839413 +0000 UTC m=+91.848037136 (delta=79.372707ms)
	I1007 10:57:02.717030   29447 fix.go:200] guest clock delta is within tolerance: 79.372707ms
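The clock check in fix.go simply runs date +%s.%N on the guest and diffs it against the host's wall clock; here the ~79ms delta is within tolerance. A rough by-hand version of the same comparison (SSH key path and user taken from this log; requires bc on the host, and ignores SSH round-trip time, so treat it as an upper bound):

    host_now=$(date +%s.%N)
    guest_now=$(ssh -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa \
        docker@192.168.39.250 'date +%s.%N')
    echo "guest - host = $(echo "$guest_now - $host_now" | bc) s"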
	I1007 10:57:02.717039   29447 start.go:83] releasing machines lock for "ha-406505", held for 1m31.827376309s
	I1007 10:57:02.717068   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.717326   29447 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:57:02.719717   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.720045   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.720070   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.720179   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.720690   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.720867   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.720951   29447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 10:57:02.721002   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:57:02.721045   29447 ssh_runner.go:195] Run: cat /version.json
	I1007 10:57:02.721066   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:57:02.723380   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.723574   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.723766   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.723798   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.723929   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:57:02.724086   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.724104   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.724106   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.724245   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:57:02.724286   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:57:02.724375   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.724386   29447 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:57:02.724493   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:57:02.724605   29447 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:57:02.828482   29447 ssh_runner.go:195] Run: systemctl --version
	I1007 10:57:02.834933   29447 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 10:57:02.995415   29447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 10:57:03.004313   29447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 10:57:03.004375   29447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 10:57:03.014071   29447 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1007 10:57:03.014098   29447 start.go:495] detecting cgroup driver to use...
	I1007 10:57:03.014160   29447 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 10:57:03.031548   29447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 10:57:03.045665   29447 docker.go:217] disabling cri-docker service (if available) ...
	I1007 10:57:03.045720   29447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 10:57:03.060885   29447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 10:57:03.075305   29447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 10:57:03.229941   29447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 10:57:03.380003   29447 docker.go:233] disabling docker service ...
	I1007 10:57:03.380072   29447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 10:57:03.397931   29447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 10:57:03.412383   29447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 10:57:03.567900   29447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 10:57:03.721366   29447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 10:57:03.737163   29447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 10:57:03.756494   29447 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 10:57:03.756570   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.767799   29447 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 10:57:03.767866   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.778739   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.789495   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.800585   29447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 10:57:03.813221   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.824053   29447 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.835220   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
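After the sed passes above, the relevant keys in the CRI-O drop-in should read roughly as follows; a grep is enough to confirm (the full file ships with the minikube ISO, so treat the expected lines as a sketch):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",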
	I1007 10:57:03.845426   29447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 10:57:03.854894   29447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 10:57:03.864074   29447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:57:04.020012   29447 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 10:57:04.256195   29447 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 10:57:04.256262   29447 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 10:57:04.261541   29447 start.go:563] Will wait 60s for crictl version
	I1007 10:57:04.261605   29447 ssh_runner.go:195] Run: which crictl
	I1007 10:57:04.266424   29447 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 10:57:04.306687   29447 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 10:57:04.306770   29447 ssh_runner.go:195] Run: crio --version
	I1007 10:57:04.342644   29447 ssh_runner.go:195] Run: crio --version
	I1007 10:57:04.376624   29447 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 10:57:04.378190   29447 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:57:04.381211   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:04.381557   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:04.381578   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:04.381799   29447 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 10:57:04.386556   29447 kubeadm.go:883] updating cluster {Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.2 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 10:57:04.386679   29447 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:57:04.386728   29447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:57:04.431534   29447 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 10:57:04.431564   29447 crio.go:433] Images already preloaded, skipping extraction
	I1007 10:57:04.431618   29447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:57:04.471722   29447 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 10:57:04.471751   29447 cache_images.go:84] Images are preloaded, skipping loading
	I1007 10:57:04.471764   29447 kubeadm.go:934] updating node { 192.168.39.250 8443 v1.31.1 crio true true} ...
	I1007 10:57:04.471889   29447 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406505 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
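The kubelet unit fragment above is written out as a systemd drop-in a few lines further down (10-kubeadm.conf, 309 bytes). Once it is in place, the merged unit and the node-ip override can be checked with:

    systemctl cat kubelet                 # prints kubelet.service plus the 10-kubeadm.conf drop-in
    grep node-ip /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    # --node-ip=192.168.39.250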
	I1007 10:57:04.471959   29447 ssh_runner.go:195] Run: crio config
	I1007 10:57:04.525534   29447 cni.go:84] Creating CNI manager for ""
	I1007 10:57:04.525555   29447 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1007 10:57:04.525564   29447 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 10:57:04.525581   29447 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.250 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-406505 NodeName:ha-406505 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 10:57:04.525698   29447 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-406505"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 10:57:04.525716   29447 kube-vip.go:115] generating kube-vip config ...
	I1007 10:57:04.525751   29447 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 10:57:04.537676   29447 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 10:57:04.537777   29447 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
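The static pod above runs kube-vip with leader election (lease plndr-cp-lock) and binds the HA VIP 192.168.39.254 on eth0 of whichever control plane currently leads. Two quick checks once the cluster is back up (only the elected leader holds the VIP at any time):

    ip addr show eth0 | grep 192.168.39.254          # VIP is bound only on the current leader
    kubectl -n kube-system get lease plndr-cp-lock   # shows which node holds the kube-vip lease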
	I1007 10:57:04.537841   29447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 10:57:04.547556   29447 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 10:57:04.547619   29447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1007 10:57:04.557240   29447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1007 10:57:04.575646   29447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 10:57:04.593225   29447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
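With the rendered kubeadm config now sitting at /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked without touching the node. A sketch, assuming the bundled kubeadm supports `config validate` (otherwise `kubeadm init --dry-run --config ...` serves the same purpose):

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new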
	I1007 10:57:04.611864   29447 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 10:57:04.630249   29447 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 10:57:04.634460   29447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:57:04.779449   29447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:57:04.794566   29447 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505 for IP: 192.168.39.250
	I1007 10:57:04.794589   29447 certs.go:194] generating shared ca certs ...
	I1007 10:57:04.794603   29447 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:57:04.794760   29447 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 10:57:04.794902   29447 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 10:57:04.794927   29447 certs.go:256] generating profile certs ...
	I1007 10:57:04.795030   29447 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key
	I1007 10:57:04.795066   29447 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.0791b376
	I1007 10:57:04.795083   29447 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.0791b376 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.250 192.168.39.37 192.168.39.102 192.168.39.254]
	I1007 10:57:05.108330   29447 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.0791b376 ...
	I1007 10:57:05.108361   29447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.0791b376: {Name:mk04adcfb95e9408df73c49cc28f69521efd4eb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:57:05.108524   29447 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.0791b376 ...
	I1007 10:57:05.108541   29447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.0791b376: {Name:mk08d01b1655950dbc2445f79f2d8bdc29563add Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:57:05.108614   29447 certs.go:381] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.0791b376 -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt
	I1007 10:57:05.108753   29447 certs.go:385] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.0791b376 -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key
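The freshly signed apiserver cert carries every address a client might dial, including the kube-vip VIP. Its SAN list can be inspected directly:

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
    # expect at least the IPs listed above: 10.96.0.1, 127.0.0.1, 10.0.0.1,
    # 192.168.39.250, 192.168.39.37, 192.168.39.102 and the VIP 192.168.39.254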
	I1007 10:57:05.108875   29447 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key
	I1007 10:57:05.108890   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 10:57:05.108904   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 10:57:05.108914   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 10:57:05.108926   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 10:57:05.108938   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 10:57:05.108949   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 10:57:05.108961   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 10:57:05.108973   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 10:57:05.109020   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 10:57:05.109055   29447 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 10:57:05.109066   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 10:57:05.109091   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 10:57:05.109135   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 10:57:05.109164   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 10:57:05.109202   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:57:05.109238   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem -> /usr/share/ca-certificates/11096.pem
	I1007 10:57:05.109251   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /usr/share/ca-certificates/110962.pem
	I1007 10:57:05.109262   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:57:05.109871   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 10:57:05.360442   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 10:57:05.605815   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 10:57:05.850088   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 10:57:06.219588   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1007 10:57:06.276707   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 10:57:06.318692   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 10:57:06.348933   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 10:57:06.385454   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 10:57:06.415472   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 10:57:06.447267   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 10:57:06.504935   29447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 10:57:06.532354   29447 ssh_runner.go:195] Run: openssl version
	I1007 10:57:06.539545   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 10:57:06.554465   29447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 10:57:06.560708   29447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 10:57:06.560773   29447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 10:57:06.569485   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
	I1007 10:57:06.586629   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 10:57:06.600762   29447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 10:57:06.608271   29447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 10:57:06.608356   29447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 10:57:06.616754   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 10:57:06.632118   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 10:57:06.646429   29447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:57:06.655247   29447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:57:06.655315   29447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:57:06.661893   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
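The three symlink commands above implement OpenSSL's hashed-directory lookup: `openssl x509 -hash` prints the subject-name hash, and a <hash>.0 symlink in /etc/ssl/certs is what lets the library find the CA. For the minikube CA, for example:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem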
	I1007 10:57:06.674956   29447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 10:57:06.682421   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 10:57:06.688720   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 10:57:06.695527   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 10:57:06.702575   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 10:57:06.709386   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 10:57:06.715690   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
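The batch of `-checkend 86400` calls above asks OpenSSL whether each control-plane cert survives the next 24 hours: exit status 0 means it does, non-zero means it expires (or is unreadable) within that window. Applied to a single cert:

    if sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt; then
        echo "apiserver.crt is good for at least another 24h"
    else
        echo "apiserver.crt expires within 24h - time to rotate"
    fi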
	I1007 10:57:06.724003   29447 kubeadm.go:392] StartCluster: {Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clus
terName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.2 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:57:06.724168   29447 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 10:57:06.724228   29447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 10:57:06.792220   29447 cri.go:89] found id: "a64bf2e21e156733427ea0d3a45ec9f23d99632adb4fc9587bd263896cb45c81"
	I1007 10:57:06.792250   29447 cri.go:89] found id: "606ec7a724513e10c9da9a27b0b650b8c529f2df4f1079e79bcb30d4c7839fcf"
	I1007 10:57:06.792256   29447 cri.go:89] found id: "d6f4624f73f68c6b59c63a1c3a5b28b4d748f196ec2bac402e5462f97addeae5"
	I1007 10:57:06.792261   29447 cri.go:89] found id: "630e5de32b697cc2301625c159c7ec527a1d4c719a4018553d5edb345a23ca79"
	I1007 10:57:06.792265   29447 cri.go:89] found id: "54438f91675378609a3f994ca735839da4a4bdd24c088cd3a42b45cdf6008d74"
	I1007 10:57:06.792270   29447 cri.go:89] found id: "815c284d9f8c834cea5412ecc0f136a8219af90faff522693c81431cfcbb170e"
	I1007 10:57:06.792273   29447 cri.go:89] found id: "048e86e40dd08c62b9fed5f84a6d7c6ba376d8e40348f0a461ee4b5ed1eb0c1e"
	I1007 10:57:06.792284   29447 cri.go:89] found id: "55130afb3140b78545837a44e0d1200ed084970a981975f2439a746c1aee5ecd"
	I1007 10:57:06.792289   29447 cri.go:89] found id: "1799fca1e0776626eea0f6a1d7d4e5470021a7a26022e13fbb3dd3fd3a4dff19"
	I1007 10:57:06.792295   29447 cri.go:89] found id: "809bd2a742c43a680efa79ca906fec95b70290a0d3fe3628198ee66abc1da27b"
	I1007 10:57:06.792299   29447 cri.go:89] found id: "46ee0ba8c50585b784c79a0db0e2996a651504cb4a60879c5e7db44d64cd22c6"
	I1007 10:57:06.792303   29447 cri.go:89] found id: "b0cc4a36e486c6a488e846bfbf03e43b7da70bb2bf99a153487b87f34d565f12"
	I1007 10:57:06.792308   29447 cri.go:89] found id: "0ebc4ee6afc90a7e8d867a5ba3221808427590e259e04a5ba21cc1196931b136"
	I1007 10:57:06.792312   29447 cri.go:89] found id: "4abb8ea9312274b1d39534e4b29dcfa3f2435a34e478f7748745c76ad1380dec"
	I1007 10:57:06.792320   29447 cri.go:89] found id: "99b7425285dcb9164d5c7fff76a667317585d30360dbfa7acfcbbb4564f111ff"
	I1007 10:57:06.792324   29447 cri.go:89] found id: "fa4965d1b169f33dee456a383fa75c0f93cca6bf8017be5bf1f0787d83919887"
	I1007 10:57:06.792328   29447 cri.go:89] found id: "11a16a81bf6bf12ab8aa881ebbb9fc65f881114f3caf29ac204033459e11987b"
	I1007 10:57:06.792333   29447 cri.go:89] found id: "eb0b61d1fd92070463676e33d6b2f91e643d202d53b9af7712c8459581d39750"
	I1007 10:57:06.792337   29447 cri.go:89] found id: ""
	I1007 10:57:06.792388   29447 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-406505 -n ha-406505
helpers_test.go:261: (dbg) Run:  kubectl --context ha-406505 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-55rw7 etcd-ha-406505-m03 kube-apiserver-ha-406505-m03 kube-controller-manager-ha-406505-m03 kube-scheduler-ha-406505-m03 kube-vip-ha-406505-m03
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-406505 describe pod busybox-7dff88458-55rw7 etcd-ha-406505-m03 kube-apiserver-ha-406505-m03 kube-controller-manager-ha-406505-m03 kube-scheduler-ha-406505-m03 kube-vip-ha-406505-m03
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ha-406505 describe pod busybox-7dff88458-55rw7 etcd-ha-406505-m03 kube-apiserver-ha-406505-m03 kube-controller-manager-ha-406505-m03 kube-scheduler-ha-406505-m03 kube-vip-ha-406505-m03: exit status 1 (130.545788ms)

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-55rw7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dr6c9 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-dr6c9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age               From               Message
	  ----     ------            ----              ----               -------
	  Warning  FailedScheduling  10s               default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  8s                default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  9s (x2 over 11s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "etcd-ha-406505-m03" not found
	Error from server (NotFound): pods "kube-apiserver-ha-406505-m03" not found
	Error from server (NotFound): pods "kube-controller-manager-ha-406505-m03" not found
	Error from server (NotFound): pods "kube-scheduler-ha-406505-m03" not found
	Error from server (NotFound): pods "kube-vip-ha-406505-m03" not found

** /stderr **
helpers_test.go:279: kubectl --context ha-406505 describe pod busybox-7dff88458-55rw7 etcd-ha-406505-m03 kube-apiserver-ha-406505-m03 kube-controller-manager-ha-406505-m03 kube-scheduler-ha-406505-m03 kube-vip-ha-406505-m03: exit status 1
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (10.98s)
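The FailedScheduling events above show why the rescheduled busybox pod stays Pending after the node delete: of the four nodes, one still carries the node.kubernetes.io/unreachable taint, one is marked unschedulable, and the remaining two don't match the pod's anti-affinity rules. A minimal way to confirm that state against the same context (a sketch only; these commands are not part of the test run):

    kubectl --context ha-406505 get nodes \
      -o custom-columns=NAME:.metadata.name,UNSCHEDULABLE:.spec.unschedulable,TAINTS:.spec.taints[*].key
    kubectl --context ha-406505 get pods -l app=busybox -o wide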

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (6.14s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-406505" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-406505\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-406505\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-406505\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.250\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.37\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.2\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,
\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetr
ics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-406505 -n ha-406505
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-406505 logs -n 25: (4.968791465s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m02 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m03_ha-406505-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m03:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04:/home/docker/cp-test_ha-406505-m03_ha-406505-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m04 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m03_ha-406505-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-406505 cp testdata/cp-test.txt                                                | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2665267876/001/cp-test_ha-406505-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505:/home/docker/cp-test_ha-406505-m04_ha-406505.txt                       |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505 sudo cat                                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m04_ha-406505.txt                                 |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m02:/home/docker/cp-test_ha-406505-m04_ha-406505-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m02 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m04_ha-406505-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03:/home/docker/cp-test_ha-406505-m04_ha-406505-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m03 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m04_ha-406505-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-406505 node stop m02 -v=7                                                     | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-406505 node start m02 -v=7                                                    | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-406505 -v=7                                                           | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-406505 -v=7                                                                | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-406505 --wait=true -v=7                                                    | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-406505                                                                | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 11:06 UTC |                     |
	| node    | ha-406505 node delete m03 -v=7                                                   | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 11:06 UTC | 07 Oct 24 11:06 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 10:55:30
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 10:55:30.794033   29447 out.go:345] Setting OutFile to fd 1 ...
	I1007 10:55:30.794343   29447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:55:30.794353   29447 out.go:358] Setting ErrFile to fd 2...
	I1007 10:55:30.794358   29447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:55:30.794636   29447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 10:55:30.795389   29447 out.go:352] Setting JSON to false
	I1007 10:55:30.796394   29447 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2225,"bootTime":1728296306,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 10:55:30.796491   29447 start.go:139] virtualization: kvm guest
	I1007 10:55:30.799104   29447 out.go:177] * [ha-406505] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 10:55:30.800626   29447 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 10:55:30.800640   29447 notify.go:220] Checking for updates...
	I1007 10:55:30.803410   29447 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 10:55:30.804914   29447 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:55:30.806210   29447 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:55:30.807469   29447 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 10:55:30.808873   29447 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 10:55:30.810633   29447 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:55:30.810741   29447 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 10:55:30.811301   29447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:55:30.811382   29447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:55:30.827997   29447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41053
	I1007 10:55:30.828419   29447 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:55:30.828927   29447 main.go:141] libmachine: Using API Version  1
	I1007 10:55:30.828950   29447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:55:30.829275   29447 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:55:30.829462   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:55:30.866638   29447 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 10:55:30.867832   29447 start.go:297] selected driver: kvm2
	I1007 10:55:30.867847   29447 start.go:901] validating driver "kvm2" against &{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.2 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:55:30.867993   29447 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 10:55:30.868324   29447 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:55:30.868393   29447 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19761-3912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 10:55:30.883607   29447 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 10:55:30.884398   29447 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 10:55:30.884430   29447 cni.go:84] Creating CNI manager for ""
	I1007 10:55:30.884477   29447 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1007 10:55:30.884532   29447 start.go:340] cluster config:
	{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.3
9.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.2 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:55:30.884667   29447 iso.go:125] acquiring lock: {Name:mk894f62b1e70fe8556c552f818beb1e4be6fad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:55:30.887625   29447 out.go:177] * Starting "ha-406505" primary control-plane node in "ha-406505" cluster
	I1007 10:55:30.889130   29447 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:55:30.889173   29447 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 10:55:30.889180   29447 cache.go:56] Caching tarball of preloaded images
	I1007 10:55:30.889265   29447 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 10:55:30.889276   29447 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 10:55:30.889406   29447 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:55:30.889609   29447 start.go:360] acquireMachinesLock for ha-406505: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 10:55:30.889652   29447 start.go:364] duration metric: took 24.494µs to acquireMachinesLock for "ha-406505"
	I1007 10:55:30.889665   29447 start.go:96] Skipping create...Using existing machine configuration
	I1007 10:55:30.889672   29447 fix.go:54] fixHost starting: 
	I1007 10:55:30.889919   29447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:55:30.889956   29447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:55:30.905409   29447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44753
	I1007 10:55:30.905796   29447 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:55:30.906241   29447 main.go:141] libmachine: Using API Version  1
	I1007 10:55:30.906267   29447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:55:30.906599   29447 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:55:30.906789   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:55:30.906907   29447 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:55:30.908591   29447 fix.go:112] recreateIfNeeded on ha-406505: state=Running err=<nil>
	W1007 10:55:30.908611   29447 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 10:55:30.911510   29447 out.go:177] * Updating the running kvm2 "ha-406505" VM ...
	I1007 10:55:30.912725   29447 machine.go:93] provisionDockerMachine start ...
	I1007 10:55:30.912748   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:55:30.913010   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:30.915628   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:30.916120   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:30.916146   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:30.916330   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:55:30.916511   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:30.916680   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:30.916822   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:55:30.916955   29447 main.go:141] libmachine: Using SSH client type: native
	I1007 10:55:30.917153   29447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:55:30.917166   29447 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 10:55:31.033780   29447 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406505
	
	I1007 10:55:31.033807   29447 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:55:31.034055   29447 buildroot.go:166] provisioning hostname "ha-406505"
	I1007 10:55:31.034084   29447 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:55:31.034284   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:31.036957   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.037413   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.037434   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.037635   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:55:31.037817   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.037986   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.038124   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:55:31.038289   29447 main.go:141] libmachine: Using SSH client type: native
	I1007 10:55:31.038459   29447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:55:31.038471   29447 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406505 && echo "ha-406505" | sudo tee /etc/hostname
	I1007 10:55:31.163165   29447 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406505
	
	I1007 10:55:31.163191   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:31.165768   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.166076   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.166103   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.166240   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:55:31.166482   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.166659   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.166867   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:55:31.167037   29447 main.go:141] libmachine: Using SSH client type: native
	I1007 10:55:31.167200   29447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:55:31.167215   29447 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406505' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406505/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406505' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 10:55:31.281078   29447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:55:31.281115   29447 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 10:55:31.281162   29447 buildroot.go:174] setting up certificates
	I1007 10:55:31.281174   29447 provision.go:84] configureAuth start
	I1007 10:55:31.281188   29447 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:55:31.281444   29447 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:55:31.283970   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.284388   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.284407   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.284595   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:31.287215   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.287589   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.287607   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.287775   29447 provision.go:143] copyHostCerts
	I1007 10:55:31.287819   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:55:31.287852   29447 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 10:55:31.287869   29447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:55:31.287940   29447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 10:55:31.288067   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:55:31.288094   29447 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 10:55:31.288104   29447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:55:31.288150   29447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 10:55:31.288213   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:55:31.288231   29447 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 10:55:31.288238   29447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:55:31.288273   29447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 10:55:31.288330   29447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.ha-406505 san=[127.0.0.1 192.168.39.250 ha-406505 localhost minikube]
	I1007 10:55:31.355824   29447 provision.go:177] copyRemoteCerts
	I1007 10:55:31.355877   29447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 10:55:31.355903   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:31.358704   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.359013   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.359045   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.359197   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:55:31.359373   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.359532   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:55:31.359697   29447 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:55:31.447226   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 10:55:31.447288   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 10:55:31.474841   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 10:55:31.474941   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1007 10:55:31.503482   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 10:55:31.503562   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 10:55:31.530808   29447 provision.go:87] duration metric: took 249.62125ms to configureAuth
	I1007 10:55:31.530835   29447 buildroot.go:189] setting minikube options for container-runtime
	I1007 10:55:31.531044   29447 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:55:31.531130   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:31.534412   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.534867   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.534899   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.535087   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:55:31.535266   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.535472   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.535637   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:55:31.535791   29447 main.go:141] libmachine: Using SSH client type: native
	I1007 10:55:31.535959   29447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:55:31.536003   29447 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 10:57:02.380736   29447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 10:57:02.380814   29447 machine.go:96] duration metric: took 1m31.468035985s to provisionDockerMachine
	I1007 10:57:02.380830   29447 start.go:293] postStartSetup for "ha-406505" (driver="kvm2")
	I1007 10:57:02.380850   29447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 10:57:02.380876   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.381188   29447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 10:57:02.381220   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:57:02.384384   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.384896   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.384926   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.385018   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:57:02.385183   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.385347   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:57:02.385473   29447 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:57:02.471888   29447 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 10:57:02.476934   29447 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 10:57:02.476965   29447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 10:57:02.477032   29447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 10:57:02.477129   29447 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 10:57:02.477144   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /etc/ssl/certs/110962.pem
	I1007 10:57:02.477256   29447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 10:57:02.487344   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:57:02.513177   29447 start.go:296] duration metric: took 132.325528ms for postStartSetup
	I1007 10:57:02.513227   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.513496   29447 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1007 10:57:02.513521   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:57:02.516263   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.516783   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.516813   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.516980   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:57:02.517176   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.517396   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:57:02.517564   29447 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	W1007 10:57:02.602805   29447 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1007 10:57:02.602831   29447 fix.go:56] duration metric: took 1m31.713158307s for fixHost
	I1007 10:57:02.602856   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:57:02.605787   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.606125   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.606153   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.606373   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:57:02.606599   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.606770   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.606900   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:57:02.607063   29447 main.go:141] libmachine: Using SSH client type: native
	I1007 10:57:02.607214   29447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:57:02.607225   29447 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 10:57:02.716959   29447 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728298622.682212120
	
	I1007 10:57:02.716987   29447 fix.go:216] guest clock: 1728298622.682212120
	I1007 10:57:02.716996   29447 fix.go:229] Guest: 2024-10-07 10:57:02.68221212 +0000 UTC Remote: 2024-10-07 10:57:02.602839413 +0000 UTC m=+91.848037136 (delta=79.372707ms)
	I1007 10:57:02.717030   29447 fix.go:200] guest clock delta is within tolerance: 79.372707ms
	I1007 10:57:02.717039   29447 start.go:83] releasing machines lock for "ha-406505", held for 1m31.827376309s
	I1007 10:57:02.717068   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.717326   29447 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:57:02.719717   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.720045   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.720070   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.720179   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.720690   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.720867   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.720951   29447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 10:57:02.721002   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:57:02.721045   29447 ssh_runner.go:195] Run: cat /version.json
	I1007 10:57:02.721066   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:57:02.723380   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.723574   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.723766   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.723798   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.723929   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:57:02.724086   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.724104   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.724106   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.724245   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:57:02.724286   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:57:02.724375   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.724386   29447 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:57:02.724493   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:57:02.724605   29447 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:57:02.828482   29447 ssh_runner.go:195] Run: systemctl --version
	I1007 10:57:02.834933   29447 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 10:57:02.995415   29447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 10:57:03.004313   29447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 10:57:03.004375   29447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 10:57:03.014071   29447 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1007 10:57:03.014098   29447 start.go:495] detecting cgroup driver to use...
	I1007 10:57:03.014160   29447 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 10:57:03.031548   29447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 10:57:03.045665   29447 docker.go:217] disabling cri-docker service (if available) ...
	I1007 10:57:03.045720   29447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 10:57:03.060885   29447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 10:57:03.075305   29447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 10:57:03.229941   29447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 10:57:03.380003   29447 docker.go:233] disabling docker service ...
	I1007 10:57:03.380072   29447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 10:57:03.397931   29447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 10:57:03.412383   29447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 10:57:03.567900   29447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 10:57:03.721366   29447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 10:57:03.737163   29447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 10:57:03.756494   29447 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 10:57:03.756570   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.767799   29447 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 10:57:03.767866   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.778739   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.789495   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.800585   29447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 10:57:03.813221   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.824053   29447 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.835220   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.845426   29447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 10:57:03.854894   29447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 10:57:03.864074   29447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:57:04.020012   29447 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 10:57:04.256195   29447 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 10:57:04.256262   29447 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 10:57:04.261541   29447 start.go:563] Will wait 60s for crictl version
	I1007 10:57:04.261605   29447 ssh_runner.go:195] Run: which crictl
	I1007 10:57:04.266424   29447 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 10:57:04.306687   29447 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 10:57:04.306770   29447 ssh_runner.go:195] Run: crio --version
	I1007 10:57:04.342644   29447 ssh_runner.go:195] Run: crio --version
	I1007 10:57:04.376624   29447 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 10:57:04.378190   29447 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:57:04.381211   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:04.381557   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:04.381578   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:04.381799   29447 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 10:57:04.386556   29447 kubeadm.go:883] updating cluster {Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.2 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 10:57:04.386679   29447 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:57:04.386728   29447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:57:04.431534   29447 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 10:57:04.431564   29447 crio.go:433] Images already preloaded, skipping extraction
	I1007 10:57:04.431618   29447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:57:04.471722   29447 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 10:57:04.471751   29447 cache_images.go:84] Images are preloaded, skipping loading
	I1007 10:57:04.471764   29447 kubeadm.go:934] updating node { 192.168.39.250 8443 v1.31.1 crio true true} ...
	I1007 10:57:04.471889   29447 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406505 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 10:57:04.471959   29447 ssh_runner.go:195] Run: crio config
	I1007 10:57:04.525534   29447 cni.go:84] Creating CNI manager for ""
	I1007 10:57:04.525555   29447 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1007 10:57:04.525564   29447 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 10:57:04.525581   29447 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.250 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-406505 NodeName:ha-406505 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 10:57:04.525698   29447 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-406505"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 10:57:04.525716   29447 kube-vip.go:115] generating kube-vip config ...
	I1007 10:57:04.525751   29447 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 10:57:04.537676   29447 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 10:57:04.537777   29447 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1007 10:57:04.537841   29447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 10:57:04.547556   29447 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 10:57:04.547619   29447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1007 10:57:04.557240   29447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1007 10:57:04.575646   29447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 10:57:04.593225   29447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1007 10:57:04.611864   29447 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 10:57:04.630249   29447 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 10:57:04.634460   29447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:57:04.779449   29447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:57:04.794566   29447 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505 for IP: 192.168.39.250
	I1007 10:57:04.794589   29447 certs.go:194] generating shared ca certs ...
	I1007 10:57:04.794603   29447 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:57:04.794760   29447 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 10:57:04.794902   29447 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 10:57:04.794927   29447 certs.go:256] generating profile certs ...
	I1007 10:57:04.795030   29447 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key
	I1007 10:57:04.795066   29447 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.0791b376
	I1007 10:57:04.795083   29447 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.0791b376 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.250 192.168.39.37 192.168.39.102 192.168.39.254]
	I1007 10:57:05.108330   29447 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.0791b376 ...
	I1007 10:57:05.108361   29447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.0791b376: {Name:mk04adcfb95e9408df73c49cc28f69521efd4eb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:57:05.108524   29447 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.0791b376 ...
	I1007 10:57:05.108541   29447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.0791b376: {Name:mk08d01b1655950dbc2445f79f2d8bdc29563add Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:57:05.108614   29447 certs.go:381] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.0791b376 -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt
	I1007 10:57:05.108753   29447 certs.go:385] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.0791b376 -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key
	I1007 10:57:05.108875   29447 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key
	I1007 10:57:05.108890   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 10:57:05.108904   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 10:57:05.108914   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 10:57:05.108926   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 10:57:05.108938   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 10:57:05.108949   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 10:57:05.108961   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 10:57:05.108973   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 10:57:05.109020   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 10:57:05.109055   29447 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 10:57:05.109066   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 10:57:05.109091   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 10:57:05.109135   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 10:57:05.109164   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 10:57:05.109202   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:57:05.109238   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem -> /usr/share/ca-certificates/11096.pem
	I1007 10:57:05.109251   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /usr/share/ca-certificates/110962.pem
	I1007 10:57:05.109262   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:57:05.109871   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 10:57:05.360442   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 10:57:05.605815   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 10:57:05.850088   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 10:57:06.219588   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1007 10:57:06.276707   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 10:57:06.318692   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 10:57:06.348933   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 10:57:06.385454   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 10:57:06.415472   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 10:57:06.447267   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 10:57:06.504935   29447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 10:57:06.532354   29447 ssh_runner.go:195] Run: openssl version
	I1007 10:57:06.539545   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 10:57:06.554465   29447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 10:57:06.560708   29447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 10:57:06.560773   29447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 10:57:06.569485   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
	I1007 10:57:06.586629   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 10:57:06.600762   29447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 10:57:06.608271   29447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 10:57:06.608356   29447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 10:57:06.616754   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 10:57:06.632118   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 10:57:06.646429   29447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:57:06.655247   29447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:57:06.655315   29447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:57:06.661893   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 10:57:06.674956   29447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 10:57:06.682421   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 10:57:06.688720   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 10:57:06.695527   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 10:57:06.702575   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 10:57:06.709386   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 10:57:06.715690   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1007 10:57:06.724003   29447 kubeadm.go:392] StartCluster: {Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clus
terName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.2 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:57:06.724168   29447 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 10:57:06.724228   29447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 10:57:06.792220   29447 cri.go:89] found id: "a64bf2e21e156733427ea0d3a45ec9f23d99632adb4fc9587bd263896cb45c81"
	I1007 10:57:06.792250   29447 cri.go:89] found id: "606ec7a724513e10c9da9a27b0b650b8c529f2df4f1079e79bcb30d4c7839fcf"
	I1007 10:57:06.792256   29447 cri.go:89] found id: "d6f4624f73f68c6b59c63a1c3a5b28b4d748f196ec2bac402e5462f97addeae5"
	I1007 10:57:06.792261   29447 cri.go:89] found id: "630e5de32b697cc2301625c159c7ec527a1d4c719a4018553d5edb345a23ca79"
	I1007 10:57:06.792265   29447 cri.go:89] found id: "54438f91675378609a3f994ca735839da4a4bdd24c088cd3a42b45cdf6008d74"
	I1007 10:57:06.792270   29447 cri.go:89] found id: "815c284d9f8c834cea5412ecc0f136a8219af90faff522693c81431cfcbb170e"
	I1007 10:57:06.792273   29447 cri.go:89] found id: "048e86e40dd08c62b9fed5f84a6d7c6ba376d8e40348f0a461ee4b5ed1eb0c1e"
	I1007 10:57:06.792284   29447 cri.go:89] found id: "55130afb3140b78545837a44e0d1200ed084970a981975f2439a746c1aee5ecd"
	I1007 10:57:06.792289   29447 cri.go:89] found id: "1799fca1e0776626eea0f6a1d7d4e5470021a7a26022e13fbb3dd3fd3a4dff19"
	I1007 10:57:06.792295   29447 cri.go:89] found id: "809bd2a742c43a680efa79ca906fec95b70290a0d3fe3628198ee66abc1da27b"
	I1007 10:57:06.792299   29447 cri.go:89] found id: "46ee0ba8c50585b784c79a0db0e2996a651504cb4a60879c5e7db44d64cd22c6"
	I1007 10:57:06.792303   29447 cri.go:89] found id: "b0cc4a36e486c6a488e846bfbf03e43b7da70bb2bf99a153487b87f34d565f12"
	I1007 10:57:06.792308   29447 cri.go:89] found id: "0ebc4ee6afc90a7e8d867a5ba3221808427590e259e04a5ba21cc1196931b136"
	I1007 10:57:06.792312   29447 cri.go:89] found id: "4abb8ea9312274b1d39534e4b29dcfa3f2435a34e478f7748745c76ad1380dec"
	I1007 10:57:06.792320   29447 cri.go:89] found id: "99b7425285dcb9164d5c7fff76a667317585d30360dbfa7acfcbbb4564f111ff"
	I1007 10:57:06.792324   29447 cri.go:89] found id: "fa4965d1b169f33dee456a383fa75c0f93cca6bf8017be5bf1f0787d83919887"
	I1007 10:57:06.792328   29447 cri.go:89] found id: "11a16a81bf6bf12ab8aa881ebbb9fc65f881114f3caf29ac204033459e11987b"
	I1007 10:57:06.792333   29447 cri.go:89] found id: "eb0b61d1fd92070463676e33d6b2f91e643d202d53b9af7712c8459581d39750"
	I1007 10:57:06.792337   29447 cri.go:89] found id: ""
	I1007 10:57:06.792388   29447 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-406505 -n ha-406505
helpers_test.go:261: (dbg) Run:  kubectl --context ha-406505 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-55rw7 etcd-ha-406505-m03 kube-apiserver-ha-406505-m03 kube-controller-manager-ha-406505-m03 kube-scheduler-ha-406505-m03 kube-vip-ha-406505-m03
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-406505 describe pod busybox-7dff88458-55rw7 etcd-ha-406505-m03 kube-apiserver-ha-406505-m03 kube-controller-manager-ha-406505-m03 kube-scheduler-ha-406505-m03 kube-vip-ha-406505-m03
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ha-406505 describe pod busybox-7dff88458-55rw7 etcd-ha-406505-m03 kube-apiserver-ha-406505-m03 kube-controller-manager-ha-406505-m03 kube-scheduler-ha-406505-m03 kube-vip-ha-406505-m03: exit status 1 (87.427786ms)

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-55rw7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dr6c9 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-dr6c9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  16s                default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  14s                default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  15s (x2 over 17s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "etcd-ha-406505-m03" not found
	Error from server (NotFound): pods "kube-apiserver-ha-406505-m03" not found
	Error from server (NotFound): pods "kube-controller-manager-ha-406505-m03" not found
	Error from server (NotFound): pods "kube-scheduler-ha-406505-m03" not found
	Error from server (NotFound): pods "kube-vip-ha-406505-m03" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context ha-406505 describe pod busybox-7dff88458-55rw7 etcd-ha-406505-m03 kube-apiserver-ha-406505-m03 kube-controller-manager-ha-406505-m03 kube-scheduler-ha-406505-m03 kube-vip-ha-406505-m03: exit status 1
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (6.14s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (175.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-406505 stop -v=7 --alsologtostderr: exit status 82 (2m1.606093373s)

                                                
                                                
-- stdout --
	* Stopping node "ha-406505-m04"  ...
	* Stopping node "ha-406505-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 11:06:49.295964   32639 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:06:49.296131   32639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:06:49.296141   32639 out.go:358] Setting ErrFile to fd 2...
	I1007 11:06:49.296146   32639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:06:49.296340   32639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 11:06:49.296589   32639 out.go:352] Setting JSON to false
	I1007 11:06:49.296675   32639 mustload.go:65] Loading cluster: ha-406505
	I1007 11:06:49.297058   32639 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:06:49.297142   32639 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 11:06:49.297353   32639 mustload.go:65] Loading cluster: ha-406505
	I1007 11:06:49.297532   32639 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:06:49.297576   32639 stop.go:39] StopHost: ha-406505-m04
	I1007 11:06:49.298043   32639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:06:49.298093   32639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:06:49.316921   32639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34519
	I1007 11:06:49.317476   32639 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:06:49.318095   32639 main.go:141] libmachine: Using API Version  1
	I1007 11:06:49.318119   32639 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:06:49.318540   32639 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:06:49.320995   32639 out.go:177] * Stopping node "ha-406505-m04"  ...
	I1007 11:06:49.322485   32639 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1007 11:06:49.322528   32639 main.go:141] libmachine: (ha-406505-m04) Calling .DriverName
	I1007 11:06:49.322853   32639 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1007 11:06:49.322897   32639 main.go:141] libmachine: (ha-406505-m04) Calling .GetSSHHostname
	I1007 11:06:49.324688   32639 retry.go:31] will retry after 259.725291ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I1007 11:06:49.585139   32639 main.go:141] libmachine: (ha-406505-m04) Calling .GetSSHHostname
	I1007 11:06:49.586828   32639 retry.go:31] will retry after 217.400467ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I1007 11:06:49.805170   32639 main.go:141] libmachine: (ha-406505-m04) Calling .GetSSHHostname
	I1007 11:06:49.806684   32639 retry.go:31] will retry after 621.671693ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I1007 11:06:50.429426   32639 main.go:141] libmachine: (ha-406505-m04) Calling .GetSSHHostname
	W1007 11:06:50.431080   32639 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I1007 11:06:50.431135   32639 main.go:141] libmachine: Stopping "ha-406505-m04"...
	I1007 11:06:50.431147   32639 main.go:141] libmachine: (ha-406505-m04) Calling .GetState
	I1007 11:06:50.432303   32639 stop.go:66] stop err: Machine "ha-406505-m04" is already stopped.
	I1007 11:06:50.432347   32639 stop.go:69] host is already stopped
	I1007 11:06:50.432359   32639 stop.go:39] StopHost: ha-406505-m02
	I1007 11:06:50.432639   32639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:06:50.432680   32639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:06:50.447739   32639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38335
	I1007 11:06:50.448167   32639 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:06:50.448613   32639 main.go:141] libmachine: Using API Version  1
	I1007 11:06:50.448639   32639 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:06:50.448937   32639 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:06:50.451060   32639 out.go:177] * Stopping node "ha-406505-m02"  ...
	I1007 11:06:50.452290   32639 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1007 11:06:50.452320   32639 main.go:141] libmachine: (ha-406505-m02) Calling .DriverName
	I1007 11:06:50.452558   32639 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1007 11:06:50.452579   32639 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHHostname
	I1007 11:06:50.455646   32639 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 11:06:50.456178   32639 main.go:141] libmachine: (ha-406505-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:d0:65", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:57:18 +0000 UTC Type:0 Mac:52:54:00:c4:d0:65 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-406505-m02 Clientid:01:52:54:00:c4:d0:65}
	I1007 11:06:50.456205   32639 main.go:141] libmachine: (ha-406505-m02) DBG | domain ha-406505-m02 has defined IP address 192.168.39.37 and MAC address 52:54:00:c4:d0:65 in network mk-ha-406505
	I1007 11:06:50.456359   32639 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHPort
	I1007 11:06:50.456533   32639 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHKeyPath
	I1007 11:06:50.456700   32639 main.go:141] libmachine: (ha-406505-m02) Calling .GetSSHUsername
	I1007 11:06:50.456840   32639 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505-m02/id_rsa Username:docker}
	I1007 11:06:50.539718   32639 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1007 11:06:50.595508   32639 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1007 11:06:50.652459   32639 main.go:141] libmachine: Stopping "ha-406505-m02"...
	I1007 11:06:50.652485   32639 main.go:141] libmachine: (ha-406505-m02) Calling .GetState
	I1007 11:06:50.654126   32639 main.go:141] libmachine: (ha-406505-m02) Calling .Stop
	I1007 11:06:50.657460   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 0/120
	I1007 11:06:51.658713   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 1/120
	I1007 11:06:52.660120   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 2/120
	I1007 11:06:53.662564   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 3/120
	I1007 11:06:54.663914   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 4/120
	I1007 11:06:55.666089   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 5/120
	I1007 11:06:56.667472   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 6/120
	I1007 11:06:57.668966   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 7/120
	I1007 11:06:58.670418   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 8/120
	I1007 11:06:59.671840   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 9/120
	I1007 11:07:00.673893   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 10/120
	I1007 11:07:01.675330   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 11/120
	I1007 11:07:02.676876   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 12/120
	I1007 11:07:03.678336   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 13/120
	I1007 11:07:04.680603   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 14/120
	I1007 11:07:05.681938   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 15/120
	I1007 11:07:06.683281   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 16/120
	I1007 11:07:07.684934   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 17/120
	I1007 11:07:08.686422   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 18/120
	I1007 11:07:09.687897   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 19/120
	I1007 11:07:10.689394   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 20/120
	I1007 11:07:11.690641   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 21/120
	I1007 11:07:12.692036   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 22/120
	I1007 11:07:13.693420   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 23/120
	I1007 11:07:14.694761   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 24/120
	I1007 11:07:15.697297   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 25/120
	I1007 11:07:16.698839   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 26/120
	I1007 11:07:17.700538   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 27/120
	I1007 11:07:18.702444   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 28/120
	I1007 11:07:19.704051   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 29/120
	I1007 11:07:20.705818   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 30/120
	I1007 11:07:21.708120   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 31/120
	I1007 11:07:22.709601   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 32/120
	I1007 11:07:23.711171   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 33/120
	I1007 11:07:24.712637   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 34/120
	I1007 11:07:25.714843   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 35/120
	I1007 11:07:26.716075   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 36/120
	I1007 11:07:27.717527   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 37/120
	I1007 11:07:28.718884   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 38/120
	I1007 11:07:29.721360   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 39/120
	I1007 11:07:30.723427   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 40/120
	I1007 11:07:31.724835   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 41/120
	I1007 11:07:32.726160   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 42/120
	I1007 11:07:33.727698   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 43/120
	I1007 11:07:34.729179   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 44/120
	I1007 11:07:35.731088   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 45/120
	I1007 11:07:36.732509   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 46/120
	I1007 11:07:37.733881   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 47/120
	I1007 11:07:38.735166   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 48/120
	I1007 11:07:39.736461   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 49/120
	I1007 11:07:40.737848   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 50/120
	I1007 11:07:41.739659   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 51/120
	I1007 11:07:42.740946   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 52/120
	I1007 11:07:43.742959   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 53/120
	I1007 11:07:44.744276   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 54/120
	I1007 11:07:45.746171   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 55/120
	I1007 11:07:46.747423   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 56/120
	I1007 11:07:47.748709   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 57/120
	I1007 11:07:48.750011   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 58/120
	I1007 11:07:49.751910   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 59/120
	I1007 11:07:50.753536   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 60/120
	I1007 11:07:51.754989   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 61/120
	I1007 11:07:52.756286   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 62/120
	I1007 11:07:53.757967   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 63/120
	I1007 11:07:54.759306   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 64/120
	I1007 11:07:55.761334   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 65/120
	I1007 11:07:56.763267   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 66/120
	I1007 11:07:57.764791   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 67/120
	I1007 11:07:58.766047   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 68/120
	I1007 11:07:59.767409   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 69/120
	I1007 11:08:00.769246   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 70/120
	I1007 11:08:01.770631   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 71/120
	I1007 11:08:02.772047   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 72/120
	I1007 11:08:03.773351   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 73/120
	I1007 11:08:04.774722   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 74/120
	I1007 11:08:05.776589   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 75/120
	I1007 11:08:06.778274   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 76/120
	I1007 11:08:07.779739   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 77/120
	I1007 11:08:08.781129   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 78/120
	I1007 11:08:09.782466   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 79/120
	I1007 11:08:10.784289   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 80/120
	I1007 11:08:11.786608   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 81/120
	I1007 11:08:12.788032   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 82/120
	I1007 11:08:13.789572   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 83/120
	I1007 11:08:14.790893   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 84/120
	I1007 11:08:15.792857   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 85/120
	I1007 11:08:16.794314   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 86/120
	I1007 11:08:17.796982   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 87/120
	I1007 11:08:18.798497   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 88/120
	I1007 11:08:19.799993   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 89/120
	I1007 11:08:20.802063   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 90/120
	I1007 11:08:21.803380   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 91/120
	I1007 11:08:22.804884   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 92/120
	I1007 11:08:23.806419   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 93/120
	I1007 11:08:24.808032   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 94/120
	I1007 11:08:25.809477   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 95/120
	I1007 11:08:26.810917   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 96/120
	I1007 11:08:27.812552   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 97/120
	I1007 11:08:28.814495   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 98/120
	I1007 11:08:29.815958   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 99/120
	I1007 11:08:30.818205   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 100/120
	I1007 11:08:31.819341   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 101/120
	I1007 11:08:32.820805   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 102/120
	I1007 11:08:33.822082   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 103/120
	I1007 11:08:34.823336   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 104/120
	I1007 11:08:35.824643   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 105/120
	I1007 11:08:36.826440   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 106/120
	I1007 11:08:37.828147   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 107/120
	I1007 11:08:38.829624   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 108/120
	I1007 11:08:39.830928   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 109/120
	I1007 11:08:40.832491   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 110/120
	I1007 11:08:41.834405   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 111/120
	I1007 11:08:42.835781   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 112/120
	I1007 11:08:43.837124   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 113/120
	I1007 11:08:44.838426   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 114/120
	I1007 11:08:45.840109   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 115/120
	I1007 11:08:46.841557   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 116/120
	I1007 11:08:47.843065   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 117/120
	I1007 11:08:48.844483   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 118/120
	I1007 11:08:49.846110   32639 main.go:141] libmachine: (ha-406505-m02) Waiting for machine to stop 119/120
	I1007 11:08:50.847090   32639 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1007 11:08:50.847150   32639 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1007 11:08:50.849316   32639 out.go:201] 
	W1007 11:08:50.850611   32639 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1007 11:08:50.850627   32639 out.go:270] * 
	* 
	W1007 11:08:50.852878   32639 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 11:08:50.855328   32639 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-406505 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr: (33.601751429s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-406505 -n ha-406505
E1007 11:09:36.387739   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-406505 -n ha-406505: exit status 2 (15.616243083s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-406505 logs -n 25: (4.231999572s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-406505 ssh -n ha-406505-m02 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m03_ha-406505-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m03:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04:/home/docker/cp-test_ha-406505-m03_ha-406505-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m04 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m03_ha-406505-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-406505 cp testdata/cp-test.txt                                                | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2665267876/001/cp-test_ha-406505-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505:/home/docker/cp-test_ha-406505-m04_ha-406505.txt                       |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505 sudo cat                                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m04_ha-406505.txt                                 |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m02:/home/docker/cp-test_ha-406505-m04_ha-406505-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m02 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m04_ha-406505-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m03:/home/docker/cp-test_ha-406505-m04_ha-406505-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n                                                                 | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | ha-406505-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-406505 ssh -n ha-406505-m03 sudo cat                                          | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC | 07 Oct 24 10:50 UTC |
	|         | /home/docker/cp-test_ha-406505-m04_ha-406505-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-406505 node stop m02 -v=7                                                     | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:50 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-406505 node start m02 -v=7                                                    | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-406505 -v=7                                                           | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-406505 -v=7                                                                | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-406505 --wait=true -v=7                                                    | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 10:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-406505                                                                | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 11:06 UTC |                     |
	| node    | ha-406505 node delete m03 -v=7                                                   | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 11:06 UTC | 07 Oct 24 11:06 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-406505 stop -v=7                                                              | ha-406505 | jenkins | v1.34.0 | 07 Oct 24 11:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 10:55:30
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 10:55:30.794033   29447 out.go:345] Setting OutFile to fd 1 ...
	I1007 10:55:30.794343   29447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:55:30.794353   29447 out.go:358] Setting ErrFile to fd 2...
	I1007 10:55:30.794358   29447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:55:30.794636   29447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 10:55:30.795389   29447 out.go:352] Setting JSON to false
	I1007 10:55:30.796394   29447 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2225,"bootTime":1728296306,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 10:55:30.796491   29447 start.go:139] virtualization: kvm guest
	I1007 10:55:30.799104   29447 out.go:177] * [ha-406505] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 10:55:30.800626   29447 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 10:55:30.800640   29447 notify.go:220] Checking for updates...
	I1007 10:55:30.803410   29447 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 10:55:30.804914   29447 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:55:30.806210   29447 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:55:30.807469   29447 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 10:55:30.808873   29447 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 10:55:30.810633   29447 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:55:30.810741   29447 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 10:55:30.811301   29447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:55:30.811382   29447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:55:30.827997   29447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41053
	I1007 10:55:30.828419   29447 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:55:30.828927   29447 main.go:141] libmachine: Using API Version  1
	I1007 10:55:30.828950   29447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:55:30.829275   29447 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:55:30.829462   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:55:30.866638   29447 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 10:55:30.867832   29447 start.go:297] selected driver: kvm2
	I1007 10:55:30.867847   29447 start.go:901] validating driver "kvm2" against &{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.2 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:55:30.867993   29447 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 10:55:30.868324   29447 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:55:30.868393   29447 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19761-3912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 10:55:30.883607   29447 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 10:55:30.884398   29447 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 10:55:30.884430   29447 cni.go:84] Creating CNI manager for ""
	I1007 10:55:30.884477   29447 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1007 10:55:30.884532   29447 start.go:340] cluster config:
	{Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.2 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:55:30.884667   29447 iso.go:125] acquiring lock: {Name:mk894f62b1e70fe8556c552f818beb1e4be6fad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:55:30.887625   29447 out.go:177] * Starting "ha-406505" primary control-plane node in "ha-406505" cluster
	I1007 10:55:30.889130   29447 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:55:30.889173   29447 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 10:55:30.889180   29447 cache.go:56] Caching tarball of preloaded images
	I1007 10:55:30.889265   29447 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 10:55:30.889276   29447 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 10:55:30.889406   29447 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/config.json ...
	I1007 10:55:30.889609   29447 start.go:360] acquireMachinesLock for ha-406505: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 10:55:30.889652   29447 start.go:364] duration metric: took 24.494µs to acquireMachinesLock for "ha-406505"
	I1007 10:55:30.889665   29447 start.go:96] Skipping create...Using existing machine configuration
	I1007 10:55:30.889672   29447 fix.go:54] fixHost starting: 
	I1007 10:55:30.889919   29447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:55:30.889956   29447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:55:30.905409   29447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44753
	I1007 10:55:30.905796   29447 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:55:30.906241   29447 main.go:141] libmachine: Using API Version  1
	I1007 10:55:30.906267   29447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:55:30.906599   29447 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:55:30.906789   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:55:30.906907   29447 main.go:141] libmachine: (ha-406505) Calling .GetState
	I1007 10:55:30.908591   29447 fix.go:112] recreateIfNeeded on ha-406505: state=Running err=<nil>
	W1007 10:55:30.908611   29447 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 10:55:30.911510   29447 out.go:177] * Updating the running kvm2 "ha-406505" VM ...
	I1007 10:55:30.912725   29447 machine.go:93] provisionDockerMachine start ...
	I1007 10:55:30.912748   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:55:30.913010   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:30.915628   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:30.916120   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:30.916146   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:30.916330   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:55:30.916511   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:30.916680   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:30.916822   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:55:30.916955   29447 main.go:141] libmachine: Using SSH client type: native
	I1007 10:55:30.917153   29447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:55:30.917166   29447 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 10:55:31.033780   29447 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406505
	
	I1007 10:55:31.033807   29447 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:55:31.034055   29447 buildroot.go:166] provisioning hostname "ha-406505"
	I1007 10:55:31.034084   29447 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:55:31.034284   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:31.036957   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.037413   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.037434   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.037635   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:55:31.037817   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.037986   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.038124   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:55:31.038289   29447 main.go:141] libmachine: Using SSH client type: native
	I1007 10:55:31.038459   29447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:55:31.038471   29447 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-406505 && echo "ha-406505" | sudo tee /etc/hostname
	I1007 10:55:31.163165   29447 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-406505
	
	I1007 10:55:31.163191   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:31.165768   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.166076   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.166103   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.166240   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:55:31.166482   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.166659   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.166867   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:55:31.167037   29447 main.go:141] libmachine: Using SSH client type: native
	I1007 10:55:31.167200   29447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:55:31.167215   29447 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-406505' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-406505/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-406505' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 10:55:31.281078   29447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:55:31.281115   29447 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 10:55:31.281162   29447 buildroot.go:174] setting up certificates
	I1007 10:55:31.281174   29447 provision.go:84] configureAuth start
	I1007 10:55:31.281188   29447 main.go:141] libmachine: (ha-406505) Calling .GetMachineName
	I1007 10:55:31.281444   29447 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:55:31.283970   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.284388   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.284407   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.284595   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:31.287215   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.287589   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.287607   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.287775   29447 provision.go:143] copyHostCerts
	I1007 10:55:31.287819   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:55:31.287852   29447 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 10:55:31.287869   29447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 10:55:31.287940   29447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 10:55:31.288067   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:55:31.288094   29447 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 10:55:31.288104   29447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 10:55:31.288150   29447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 10:55:31.288213   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:55:31.288231   29447 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 10:55:31.288238   29447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 10:55:31.288273   29447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 10:55:31.288330   29447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.ha-406505 san=[127.0.0.1 192.168.39.250 ha-406505 localhost minikube]
	I1007 10:55:31.355824   29447 provision.go:177] copyRemoteCerts
	I1007 10:55:31.355877   29447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 10:55:31.355903   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:31.358704   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.359013   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.359045   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.359197   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:55:31.359373   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.359532   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:55:31.359697   29447 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:55:31.447226   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 10:55:31.447288   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 10:55:31.474841   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 10:55:31.474941   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1007 10:55:31.503482   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 10:55:31.503562   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 10:55:31.530808   29447 provision.go:87] duration metric: took 249.62125ms to configureAuth
	I1007 10:55:31.530835   29447 buildroot.go:189] setting minikube options for container-runtime
	I1007 10:55:31.531044   29447 config.go:182] Loaded profile config "ha-406505": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:55:31.531130   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:55:31.534412   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.534867   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:55:31.534899   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:55:31.535087   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:55:31.535266   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.535472   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:55:31.535637   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:55:31.535791   29447 main.go:141] libmachine: Using SSH client type: native
	I1007 10:55:31.535959   29447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:55:31.536003   29447 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 10:57:02.380736   29447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 10:57:02.380814   29447 machine.go:96] duration metric: took 1m31.468035985s to provisionDockerMachine
	I1007 10:57:02.380830   29447 start.go:293] postStartSetup for "ha-406505" (driver="kvm2")
	I1007 10:57:02.380850   29447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 10:57:02.380876   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.381188   29447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 10:57:02.381220   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:57:02.384384   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.384896   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.384926   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.385018   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:57:02.385183   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.385347   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:57:02.385473   29447 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:57:02.471888   29447 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 10:57:02.476934   29447 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 10:57:02.476965   29447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 10:57:02.477032   29447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 10:57:02.477129   29447 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 10:57:02.477144   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /etc/ssl/certs/110962.pem
	I1007 10:57:02.477256   29447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 10:57:02.487344   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:57:02.513177   29447 start.go:296] duration metric: took 132.325528ms for postStartSetup
	I1007 10:57:02.513227   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.513496   29447 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1007 10:57:02.513521   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:57:02.516263   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.516783   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.516813   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.516980   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:57:02.517176   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.517396   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:57:02.517564   29447 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	W1007 10:57:02.602805   29447 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1007 10:57:02.602831   29447 fix.go:56] duration metric: took 1m31.713158307s for fixHost
	I1007 10:57:02.602856   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:57:02.605787   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.606125   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.606153   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.606373   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:57:02.606599   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.606770   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.606900   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:57:02.607063   29447 main.go:141] libmachine: Using SSH client type: native
	I1007 10:57:02.607214   29447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I1007 10:57:02.607225   29447 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 10:57:02.716959   29447 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728298622.682212120
	
	I1007 10:57:02.716987   29447 fix.go:216] guest clock: 1728298622.682212120
	I1007 10:57:02.716996   29447 fix.go:229] Guest: 2024-10-07 10:57:02.68221212 +0000 UTC Remote: 2024-10-07 10:57:02.602839413 +0000 UTC m=+91.848037136 (delta=79.372707ms)
	I1007 10:57:02.717030   29447 fix.go:200] guest clock delta is within tolerance: 79.372707ms
	I1007 10:57:02.717039   29447 start.go:83] releasing machines lock for "ha-406505", held for 1m31.827376309s
	I1007 10:57:02.717068   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.717326   29447 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:57:02.719717   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.720045   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.720070   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.720179   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.720690   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.720867   29447 main.go:141] libmachine: (ha-406505) Calling .DriverName
	I1007 10:57:02.720951   29447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 10:57:02.721002   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:57:02.721045   29447 ssh_runner.go:195] Run: cat /version.json
	I1007 10:57:02.721066   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHHostname
	I1007 10:57:02.723380   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.723574   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.723766   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.723798   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.723929   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:57:02.724086   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:02.724104   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:02.724106   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.724245   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:57:02.724286   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHPort
	I1007 10:57:02.724375   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHKeyPath
	I1007 10:57:02.724386   29447 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:57:02.724493   29447 main.go:141] libmachine: (ha-406505) Calling .GetSSHUsername
	I1007 10:57:02.724605   29447 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/ha-406505/id_rsa Username:docker}
	I1007 10:57:02.828482   29447 ssh_runner.go:195] Run: systemctl --version
	I1007 10:57:02.834933   29447 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 10:57:02.995415   29447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 10:57:03.004313   29447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 10:57:03.004375   29447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 10:57:03.014071   29447 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1007 10:57:03.014098   29447 start.go:495] detecting cgroup driver to use...
	I1007 10:57:03.014160   29447 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 10:57:03.031548   29447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 10:57:03.045665   29447 docker.go:217] disabling cri-docker service (if available) ...
	I1007 10:57:03.045720   29447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 10:57:03.060885   29447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 10:57:03.075305   29447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 10:57:03.229941   29447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 10:57:03.380003   29447 docker.go:233] disabling docker service ...
	I1007 10:57:03.380072   29447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 10:57:03.397931   29447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 10:57:03.412383   29447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 10:57:03.567900   29447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 10:57:03.721366   29447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 10:57:03.737163   29447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 10:57:03.756494   29447 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 10:57:03.756570   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.767799   29447 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 10:57:03.767866   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.778739   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.789495   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.800585   29447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 10:57:03.813221   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.824053   29447 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.835220   29447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:57:03.845426   29447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 10:57:03.854894   29447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 10:57:03.864074   29447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:57:04.020012   29447 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 10:57:04.256195   29447 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 10:57:04.256262   29447 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 10:57:04.261541   29447 start.go:563] Will wait 60s for crictl version
	I1007 10:57:04.261605   29447 ssh_runner.go:195] Run: which crictl
	I1007 10:57:04.266424   29447 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 10:57:04.306687   29447 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 10:57:04.306770   29447 ssh_runner.go:195] Run: crio --version
	I1007 10:57:04.342644   29447 ssh_runner.go:195] Run: crio --version
	I1007 10:57:04.376624   29447 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 10:57:04.378190   29447 main.go:141] libmachine: (ha-406505) Calling .GetIP
	I1007 10:57:04.381211   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:04.381557   29447 main.go:141] libmachine: (ha-406505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:e2:d7", ip: ""} in network mk-ha-406505: {Iface:virbr1 ExpiryTime:2024-10-07 11:46:15 +0000 UTC Type:0 Mac:52:54:00:1d:e2:d7 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-406505 Clientid:01:52:54:00:1d:e2:d7}
	I1007 10:57:04.381578   29447 main.go:141] libmachine: (ha-406505) DBG | domain ha-406505 has defined IP address 192.168.39.250 and MAC address 52:54:00:1d:e2:d7 in network mk-ha-406505
	I1007 10:57:04.381799   29447 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 10:57:04.386556   29447 kubeadm.go:883] updating cluster {Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.2 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 10:57:04.386679   29447 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:57:04.386728   29447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:57:04.431534   29447 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 10:57:04.431564   29447 crio.go:433] Images already preloaded, skipping extraction
	I1007 10:57:04.431618   29447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:57:04.471722   29447 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 10:57:04.471751   29447 cache_images.go:84] Images are preloaded, skipping loading
	I1007 10:57:04.471764   29447 kubeadm.go:934] updating node { 192.168.39.250 8443 v1.31.1 crio true true} ...
	I1007 10:57:04.471889   29447 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-406505 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 10:57:04.471959   29447 ssh_runner.go:195] Run: crio config
	I1007 10:57:04.525534   29447 cni.go:84] Creating CNI manager for ""
	I1007 10:57:04.525555   29447 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1007 10:57:04.525564   29447 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 10:57:04.525581   29447 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.250 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-406505 NodeName:ha-406505 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 10:57:04.525698   29447 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-406505"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 10:57:04.525716   29447 kube-vip.go:115] generating kube-vip config ...
	I1007 10:57:04.525751   29447 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 10:57:04.537676   29447 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 10:57:04.537777   29447 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1007 10:57:04.537841   29447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 10:57:04.547556   29447 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 10:57:04.547619   29447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1007 10:57:04.557240   29447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1007 10:57:04.575646   29447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 10:57:04.593225   29447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1007 10:57:04.611864   29447 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 10:57:04.630249   29447 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 10:57:04.634460   29447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:57:04.779449   29447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:57:04.794566   29447 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505 for IP: 192.168.39.250
	I1007 10:57:04.794589   29447 certs.go:194] generating shared ca certs ...
	I1007 10:57:04.794603   29447 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:57:04.794760   29447 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 10:57:04.794902   29447 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 10:57:04.794927   29447 certs.go:256] generating profile certs ...
	I1007 10:57:04.795030   29447 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/client.key
	I1007 10:57:04.795066   29447 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.0791b376
	I1007 10:57:04.795083   29447 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.0791b376 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.250 192.168.39.37 192.168.39.102 192.168.39.254]
	I1007 10:57:05.108330   29447 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.0791b376 ...
	I1007 10:57:05.108361   29447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.0791b376: {Name:mk04adcfb95e9408df73c49cc28f69521efd4eb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:57:05.108524   29447 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.0791b376 ...
	I1007 10:57:05.108541   29447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.0791b376: {Name:mk08d01b1655950dbc2445f79f2d8bdc29563add Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:57:05.108614   29447 certs.go:381] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt.0791b376 -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt
	I1007 10:57:05.108753   29447 certs.go:385] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key.0791b376 -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key
	I1007 10:57:05.108875   29447 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key
	I1007 10:57:05.108890   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 10:57:05.108904   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 10:57:05.108914   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 10:57:05.108926   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 10:57:05.108938   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 10:57:05.108949   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 10:57:05.108961   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 10:57:05.108973   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 10:57:05.109020   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 10:57:05.109055   29447 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 10:57:05.109066   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 10:57:05.109091   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 10:57:05.109135   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 10:57:05.109164   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 10:57:05.109202   29447 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 10:57:05.109238   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem -> /usr/share/ca-certificates/11096.pem
	I1007 10:57:05.109251   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /usr/share/ca-certificates/110962.pem
	I1007 10:57:05.109262   29447 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:57:05.109871   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 10:57:05.360442   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 10:57:05.605815   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 10:57:05.850088   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 10:57:06.219588   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1007 10:57:06.276707   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 10:57:06.318692   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 10:57:06.348933   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/ha-406505/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 10:57:06.385454   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 10:57:06.415472   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 10:57:06.447267   29447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 10:57:06.504935   29447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 10:57:06.532354   29447 ssh_runner.go:195] Run: openssl version
	I1007 10:57:06.539545   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 10:57:06.554465   29447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 10:57:06.560708   29447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 10:57:06.560773   29447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 10:57:06.569485   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
	I1007 10:57:06.586629   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 10:57:06.600762   29447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 10:57:06.608271   29447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 10:57:06.608356   29447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 10:57:06.616754   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 10:57:06.632118   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 10:57:06.646429   29447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:57:06.655247   29447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:57:06.655315   29447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:57:06.661893   29447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 10:57:06.674956   29447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 10:57:06.682421   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 10:57:06.688720   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 10:57:06.695527   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 10:57:06.702575   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 10:57:06.709386   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 10:57:06.715690   29447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1007 10:57:06.724003   29447 kubeadm.go:392] StartCluster: {Name:ha-406505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clus
terName:ha-406505 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.37 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.2 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:57:06.724168   29447 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 10:57:06.724228   29447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 10:57:06.792220   29447 cri.go:89] found id: "a64bf2e21e156733427ea0d3a45ec9f23d99632adb4fc9587bd263896cb45c81"
	I1007 10:57:06.792250   29447 cri.go:89] found id: "606ec7a724513e10c9da9a27b0b650b8c529f2df4f1079e79bcb30d4c7839fcf"
	I1007 10:57:06.792256   29447 cri.go:89] found id: "d6f4624f73f68c6b59c63a1c3a5b28b4d748f196ec2bac402e5462f97addeae5"
	I1007 10:57:06.792261   29447 cri.go:89] found id: "630e5de32b697cc2301625c159c7ec527a1d4c719a4018553d5edb345a23ca79"
	I1007 10:57:06.792265   29447 cri.go:89] found id: "54438f91675378609a3f994ca735839da4a4bdd24c088cd3a42b45cdf6008d74"
	I1007 10:57:06.792270   29447 cri.go:89] found id: "815c284d9f8c834cea5412ecc0f136a8219af90faff522693c81431cfcbb170e"
	I1007 10:57:06.792273   29447 cri.go:89] found id: "048e86e40dd08c62b9fed5f84a6d7c6ba376d8e40348f0a461ee4b5ed1eb0c1e"
	I1007 10:57:06.792284   29447 cri.go:89] found id: "55130afb3140b78545837a44e0d1200ed084970a981975f2439a746c1aee5ecd"
	I1007 10:57:06.792289   29447 cri.go:89] found id: "1799fca1e0776626eea0f6a1d7d4e5470021a7a26022e13fbb3dd3fd3a4dff19"
	I1007 10:57:06.792295   29447 cri.go:89] found id: "809bd2a742c43a680efa79ca906fec95b70290a0d3fe3628198ee66abc1da27b"
	I1007 10:57:06.792299   29447 cri.go:89] found id: "46ee0ba8c50585b784c79a0db0e2996a651504cb4a60879c5e7db44d64cd22c6"
	I1007 10:57:06.792303   29447 cri.go:89] found id: "b0cc4a36e486c6a488e846bfbf03e43b7da70bb2bf99a153487b87f34d565f12"
	I1007 10:57:06.792308   29447 cri.go:89] found id: "0ebc4ee6afc90a7e8d867a5ba3221808427590e259e04a5ba21cc1196931b136"
	I1007 10:57:06.792312   29447 cri.go:89] found id: "4abb8ea9312274b1d39534e4b29dcfa3f2435a34e478f7748745c76ad1380dec"
	I1007 10:57:06.792320   29447 cri.go:89] found id: "99b7425285dcb9164d5c7fff76a667317585d30360dbfa7acfcbbb4564f111ff"
	I1007 10:57:06.792324   29447 cri.go:89] found id: "fa4965d1b169f33dee456a383fa75c0f93cca6bf8017be5bf1f0787d83919887"
	I1007 10:57:06.792328   29447 cri.go:89] found id: "11a16a81bf6bf12ab8aa881ebbb9fc65f881114f3caf29ac204033459e11987b"
	I1007 10:57:06.792333   29447 cri.go:89] found id: "eb0b61d1fd92070463676e33d6b2f91e643d202d53b9af7712c8459581d39750"
	I1007 10:57:06.792337   29447 cri.go:89] found id: ""
	I1007 10:57:06.792388   29447 ssh_runner.go:195] Run: sudo runc list -f json

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-406505 -n ha-406505
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-406505 -n ha-406505: exit status 2 (229.619708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-406505" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (175.30s)

TestMultiNode/serial/RestartKeepsNodes (328.77s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-873106
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-873106
E1007 11:23:11.318138   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:24:36.387926   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-873106: exit status 82 (2m1.874450641s)

-- stdout --
	* Stopping node "multinode-873106-m03"  ...
	* Stopping node "multinode-873106-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-873106" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-873106 --wait=true -v=8 --alsologtostderr
E1007 11:25:08.249908   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-873106 --wait=true -v=8 --alsologtostderr: (3m24.063246219s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-873106
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-873106 -n multinode-873106
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-873106 logs -n 25: (2.143186055s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-873106 ssh -n                                                                 | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | multinode-873106-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-873106 cp multinode-873106-m02:/home/docker/cp-test.txt                       | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2112677138/001/cp-test_multinode-873106-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-873106 ssh -n                                                                 | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | multinode-873106-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-873106 cp multinode-873106-m02:/home/docker/cp-test.txt                       | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | multinode-873106:/home/docker/cp-test_multinode-873106-m02_multinode-873106.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-873106 ssh -n                                                                 | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | multinode-873106-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-873106 ssh -n multinode-873106 sudo cat                                       | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | /home/docker/cp-test_multinode-873106-m02_multinode-873106.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-873106 cp multinode-873106-m02:/home/docker/cp-test.txt                       | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | multinode-873106-m03:/home/docker/cp-test_multinode-873106-m02_multinode-873106-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-873106 ssh -n                                                                 | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | multinode-873106-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-873106 ssh -n multinode-873106-m03 sudo cat                                   | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | /home/docker/cp-test_multinode-873106-m02_multinode-873106-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-873106 cp testdata/cp-test.txt                                                | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | multinode-873106-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-873106 ssh -n                                                                 | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | multinode-873106-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-873106 cp multinode-873106-m03:/home/docker/cp-test.txt                       | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2112677138/001/cp-test_multinode-873106-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-873106 ssh -n                                                                 | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | multinode-873106-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-873106 cp multinode-873106-m03:/home/docker/cp-test.txt                       | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | multinode-873106:/home/docker/cp-test_multinode-873106-m03_multinode-873106.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-873106 ssh -n                                                                 | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | multinode-873106-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-873106 ssh -n multinode-873106 sudo cat                                       | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | /home/docker/cp-test_multinode-873106-m03_multinode-873106.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-873106 cp multinode-873106-m03:/home/docker/cp-test.txt                       | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | multinode-873106-m02:/home/docker/cp-test_multinode-873106-m03_multinode-873106-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-873106 ssh -n                                                                 | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | multinode-873106-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-873106 ssh -n multinode-873106-m02 sudo cat                                   | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | /home/docker/cp-test_multinode-873106-m03_multinode-873106-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-873106 node stop m03                                                          | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	| node    | multinode-873106 node start                                                             | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-873106                                                                | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC |                     |
	| stop    | -p multinode-873106                                                                     | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC |                     |
	| start   | -p multinode-873106                                                                     | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:24 UTC | 07 Oct 24 11:28 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-873106                                                                | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:28 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 11:24:55
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 11:24:55.493543   42947 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:24:55.493677   42947 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:24:55.493688   42947 out.go:358] Setting ErrFile to fd 2...
	I1007 11:24:55.493699   42947 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:24:55.493895   42947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 11:24:55.494430   42947 out.go:352] Setting JSON to false
	I1007 11:24:55.495309   42947 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3989,"bootTime":1728296306,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 11:24:55.495404   42947 start.go:139] virtualization: kvm guest
	I1007 11:24:55.497892   42947 out.go:177] * [multinode-873106] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 11:24:55.499575   42947 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 11:24:55.499580   42947 notify.go:220] Checking for updates...
	I1007 11:24:55.501078   42947 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:24:55.502344   42947 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 11:24:55.503601   42947 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 11:24:55.504886   42947 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 11:24:55.506306   42947 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 11:24:55.507904   42947 config.go:182] Loaded profile config "multinode-873106": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:24:55.508002   42947 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 11:24:55.508463   42947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:24:55.508527   42947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:24:55.525841   42947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40737
	I1007 11:24:55.526370   42947 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:24:55.526999   42947 main.go:141] libmachine: Using API Version  1
	I1007 11:24:55.527024   42947 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:24:55.527413   42947 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:24:55.527594   42947 main.go:141] libmachine: (multinode-873106) Calling .DriverName
	I1007 11:24:55.563347   42947 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 11:24:55.564602   42947 start.go:297] selected driver: kvm2
	I1007 11:24:55.564624   42947 start.go:901] validating driver "kvm2" against &{Name:multinode-873106 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:multinode-873106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.126 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false ins
pektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:24:55.564780   42947 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 11:24:55.565140   42947 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:24:55.565218   42947 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19761-3912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 11:24:55.580983   42947 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 11:24:55.581645   42947 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 11:24:55.581677   42947 cni.go:84] Creating CNI manager for ""
	I1007 11:24:55.581727   42947 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1007 11:24:55.581782   42947 start.go:340] cluster config:
	{Name:multinode-873106 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-873106 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.126 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflo
w:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:24:55.581913   42947 iso.go:125] acquiring lock: {Name:mk894f62b1e70fe8556c552f818beb1e4be6fad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:24:55.583781   42947 out.go:177] * Starting "multinode-873106" primary control-plane node in "multinode-873106" cluster
	I1007 11:24:55.584979   42947 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:24:55.585014   42947 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 11:24:55.585022   42947 cache.go:56] Caching tarball of preloaded images
	I1007 11:24:55.585144   42947 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 11:24:55.585159   42947 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 11:24:55.585302   42947 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106/config.json ...
	I1007 11:24:55.585544   42947 start.go:360] acquireMachinesLock for multinode-873106: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 11:24:55.585622   42947 start.go:364] duration metric: took 56.743µs to acquireMachinesLock for "multinode-873106"
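The "duration metric" lines above are produced by timing each start step. A minimal Go sketch of that pattern, assuming nothing about minikube's actual helpers (the step body and the printed label here are placeholders):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now()
        // ... the step being timed, e.g. acquiring the machines lock ...
        fmt.Printf("duration metric: took %s to acquireMachinesLock\n", time.Since(start))
    }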
	I1007 11:24:55.585641   42947 start.go:96] Skipping create...Using existing machine configuration
	I1007 11:24:55.585650   42947 fix.go:54] fixHost starting: 
	I1007 11:24:55.585948   42947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:24:55.585988   42947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:24:55.600773   42947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37399
	I1007 11:24:55.601123   42947 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:24:55.601595   42947 main.go:141] libmachine: Using API Version  1
	I1007 11:24:55.601620   42947 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:24:55.601943   42947 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:24:55.602128   42947 main.go:141] libmachine: (multinode-873106) Calling .DriverName
	I1007 11:24:55.602319   42947 main.go:141] libmachine: (multinode-873106) Calling .GetState
	I1007 11:24:55.604037   42947 fix.go:112] recreateIfNeeded on multinode-873106: state=Running err=<nil>
	W1007 11:24:55.604060   42947 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 11:24:55.605894   42947 out.go:177] * Updating the running kvm2 "multinode-873106" VM ...
	I1007 11:24:55.607122   42947 machine.go:93] provisionDockerMachine start ...
	I1007 11:24:55.607150   42947 main.go:141] libmachine: (multinode-873106) Calling .DriverName
	I1007 11:24:55.607346   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHHostname
	I1007 11:24:55.609950   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:55.610379   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:24:55.610447   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:55.610518   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHPort
	I1007 11:24:55.610675   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:24:55.610792   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:24:55.610955   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHUsername
	I1007 11:24:55.611133   42947 main.go:141] libmachine: Using SSH client type: native
	I1007 11:24:55.611349   42947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I1007 11:24:55.611363   42947 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 11:24:55.725494   42947 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-873106
	
	I1007 11:24:55.725528   42947 main.go:141] libmachine: (multinode-873106) Calling .GetMachineName
	I1007 11:24:55.725777   42947 buildroot.go:166] provisioning hostname "multinode-873106"
	I1007 11:24:55.725800   42947 main.go:141] libmachine: (multinode-873106) Calling .GetMachineName
	I1007 11:24:55.726003   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHHostname
	I1007 11:24:55.728777   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:55.729154   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:24:55.729174   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:55.729340   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHPort
	I1007 11:24:55.729525   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:24:55.729689   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:24:55.729825   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHUsername
	I1007 11:24:55.729962   42947 main.go:141] libmachine: Using SSH client type: native
	I1007 11:24:55.730162   42947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I1007 11:24:55.730186   42947 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-873106 && echo "multinode-873106" | sudo tee /etc/hostname
	I1007 11:24:55.857862   42947 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-873106
	
	I1007 11:24:55.857897   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHHostname
	I1007 11:24:55.860664   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:55.861135   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:24:55.861166   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:55.861438   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHPort
	I1007 11:24:55.861644   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:24:55.861811   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:24:55.861932   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHUsername
	I1007 11:24:55.862060   42947 main.go:141] libmachine: Using SSH client type: native
	I1007 11:24:55.862231   42947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I1007 11:24:55.862247   42947 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-873106' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-873106/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-873106' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 11:24:55.969357   42947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
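The SSH script above keeps the hostname mapping idempotent: if no /etc/hosts line already ends with the hostname, it rewrites an existing 127.0.1.1 entry, otherwise appends one. A rough Go rendering of that decision logic, purely illustrative (minikube runs it as a shell script over SSH; ensureHostsEntry is a made-up name):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // ensureHostsEntry mirrors the shell snippet above: if no /etc/hosts line
    // already ends with the hostname, rewrite an existing 127.0.1.1 entry or
    // append a new one.
    func ensureHostsEntry(hosts, hostname string) string {
        if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(hosts) {
            return hosts // mapping already present, nothing to do
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.MatchString(hosts) {
            return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
    }

    func main() {
        fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "multinode-873106"))
    }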
	I1007 11:24:55.969381   42947 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 11:24:55.969425   42947 buildroot.go:174] setting up certificates
	I1007 11:24:55.969445   42947 provision.go:84] configureAuth start
	I1007 11:24:55.969458   42947 main.go:141] libmachine: (multinode-873106) Calling .GetMachineName
	I1007 11:24:55.969722   42947 main.go:141] libmachine: (multinode-873106) Calling .GetIP
	I1007 11:24:55.972760   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:55.973125   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:24:55.973153   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:55.973290   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHHostname
	I1007 11:24:55.975459   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:55.975788   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:24:55.975824   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:55.975937   42947 provision.go:143] copyHostCerts
	I1007 11:24:55.975968   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 11:24:55.976033   42947 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 11:24:55.976053   42947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 11:24:55.976122   42947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 11:24:55.976215   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 11:24:55.976233   42947 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 11:24:55.976237   42947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 11:24:55.976263   42947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 11:24:55.976320   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 11:24:55.976339   42947 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 11:24:55.976348   42947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 11:24:55.976370   42947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 11:24:55.976428   42947 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.multinode-873106 san=[127.0.0.1 192.168.39.51 localhost minikube multinode-873106]
	I1007 11:24:56.115595   42947 provision.go:177] copyRemoteCerts
	I1007 11:24:56.115648   42947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 11:24:56.115669   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHHostname
	I1007 11:24:56.118168   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:56.118490   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:24:56.118511   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:56.118677   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHPort
	I1007 11:24:56.118876   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:24:56.119043   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHUsername
	I1007 11:24:56.119174   42947 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/multinode-873106/id_rsa Username:docker}
	I1007 11:24:56.205793   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 11:24:56.205861   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1007 11:24:56.234096   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 11:24:56.234164   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 11:24:56.267817   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 11:24:56.267897   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 11:24:56.295900   42947 provision.go:87] duration metric: took 326.442396ms to configureAuth
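configureAuth regenerated the machine's server certificate with the SANs listed above (127.0.0.1, 192.168.39.51, localhost, minikube, multinode-873106), signed by the profile CA. A compact sketch of issuing such a cert with Go's crypto/x509 stdlib; this is an assumption about the general shape, not minikube's actual provision.go code, and error handling is elided for brevity:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Self-signed CA stand-in; minikube would load ca.pem/ca-key.pem instead.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the SANs seen in the log.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-873106"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "multinode-873106"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.51")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }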
	I1007 11:24:56.295924   42947 buildroot.go:189] setting minikube options for container-runtime
	I1007 11:24:56.296149   42947 config.go:182] Loaded profile config "multinode-873106": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:24:56.296221   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHHostname
	I1007 11:24:56.298827   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:56.299187   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:24:56.299216   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:56.299357   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHPort
	I1007 11:24:56.299583   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:24:56.299716   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:24:56.299877   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHUsername
	I1007 11:24:56.300048   42947 main.go:141] libmachine: Using SSH client type: native
	I1007 11:24:56.300252   42947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I1007 11:24:56.300268   42947 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 11:26:27.127297   42947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 11:26:27.127331   42947 machine.go:96] duration metric: took 1m31.520188095s to provisionDockerMachine
	I1007 11:26:27.127345   42947 start.go:293] postStartSetup for "multinode-873106" (driver="kvm2")
	I1007 11:26:27.127359   42947 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 11:26:27.127379   42947 main.go:141] libmachine: (multinode-873106) Calling .DriverName
	I1007 11:26:27.127712   42947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 11:26:27.127744   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHHostname
	I1007 11:26:27.131016   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:26:27.131435   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:26:27.131457   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:26:27.131588   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHPort
	I1007 11:26:27.131773   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:26:27.131906   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHUsername
	I1007 11:26:27.132086   42947 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/multinode-873106/id_rsa Username:docker}
	I1007 11:26:27.216235   42947 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 11:26:27.220605   42947 command_runner.go:130] > NAME=Buildroot
	I1007 11:26:27.220629   42947 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1007 11:26:27.220635   42947 command_runner.go:130] > ID=buildroot
	I1007 11:26:27.220642   42947 command_runner.go:130] > VERSION_ID=2023.02.9
	I1007 11:26:27.220649   42947 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1007 11:26:27.220681   42947 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 11:26:27.220699   42947 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 11:26:27.220780   42947 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 11:26:27.220892   42947 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 11:26:27.220906   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /etc/ssl/certs/110962.pem
	I1007 11:26:27.221027   42947 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 11:26:27.231135   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 11:26:27.255762   42947 start.go:296] duration metric: took 128.405146ms for postStartSetup
	I1007 11:26:27.255823   42947 fix.go:56] duration metric: took 1m31.670173136s for fixHost
	I1007 11:26:27.255846   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHHostname
	I1007 11:26:27.258263   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:26:27.258541   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:26:27.258565   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:26:27.258699   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHPort
	I1007 11:26:27.258867   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:26:27.259001   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:26:27.259111   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHUsername
	I1007 11:26:27.259224   42947 main.go:141] libmachine: Using SSH client type: native
	I1007 11:26:27.259459   42947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I1007 11:26:27.259472   42947 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 11:26:27.364830   42947 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728300387.322133753
	
	I1007 11:26:27.364850   42947 fix.go:216] guest clock: 1728300387.322133753
	I1007 11:26:27.364859   42947 fix.go:229] Guest: 2024-10-07 11:26:27.322133753 +0000 UTC Remote: 2024-10-07 11:26:27.255828163 +0000 UTC m=+91.800329531 (delta=66.30559ms)
	I1007 11:26:27.364885   42947 fix.go:200] guest clock delta is within tolerance: 66.30559ms
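fix.go reads the guest clock via `date +%s.%N` over SSH and compares it with the host clock, only resyncing when the delta exceeds a tolerance; here the delta was 66.3ms and within tolerance. A minimal sketch of that check (the 1s tolerance and the clockWithinTolerance name are illustrative assumptions):

    package main

    import (
        "fmt"
        "time"
    )

    // clockWithinTolerance reports the absolute guest/host delta and whether it
    // is small enough to skip a clock resync.
    func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        guest := time.Unix(1728300387, 322133753) // parsed from `date +%s.%N` on the VM
        host := time.Now()
        delta, ok := clockWithinTolerance(guest, host, time.Second)
        fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
    }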
	I1007 11:26:27.364892   42947 start.go:83] releasing machines lock for "multinode-873106", held for 1m31.779257624s
	I1007 11:26:27.364915   42947 main.go:141] libmachine: (multinode-873106) Calling .DriverName
	I1007 11:26:27.365176   42947 main.go:141] libmachine: (multinode-873106) Calling .GetIP
	I1007 11:26:27.367610   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:26:27.368026   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:26:27.368058   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:26:27.368143   42947 main.go:141] libmachine: (multinode-873106) Calling .DriverName
	I1007 11:26:27.368627   42947 main.go:141] libmachine: (multinode-873106) Calling .DriverName
	I1007 11:26:27.368772   42947 main.go:141] libmachine: (multinode-873106) Calling .DriverName
	I1007 11:26:27.368845   42947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 11:26:27.368892   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHHostname
	I1007 11:26:27.368986   42947 ssh_runner.go:195] Run: cat /version.json
	I1007 11:26:27.369015   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHHostname
	I1007 11:26:27.371692   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:26:27.371866   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:26:27.372147   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:26:27.372178   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:26:27.372245   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:26:27.372281   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:26:27.372365   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHPort
	I1007 11:26:27.372484   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHPort
	I1007 11:26:27.372610   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:26:27.372687   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:26:27.372716   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHUsername
	I1007 11:26:27.372793   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHUsername
	I1007 11:26:27.372854   42947 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/multinode-873106/id_rsa Username:docker}
	I1007 11:26:27.372892   42947 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/multinode-873106/id_rsa Username:docker}
	I1007 11:26:27.449818   42947 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I1007 11:26:27.449979   42947 ssh_runner.go:195] Run: systemctl --version
	I1007 11:26:27.483080   42947 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1007 11:26:27.483143   42947 command_runner.go:130] > systemd 252 (252)
	I1007 11:26:27.483164   42947 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1007 11:26:27.483210   42947 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 11:26:27.644338   42947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 11:26:27.653180   42947 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1007 11:26:27.653218   42947 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 11:26:27.653287   42947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 11:26:27.663150   42947 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1007 11:26:27.663178   42947 start.go:495] detecting cgroup driver to use...
	I1007 11:26:27.663268   42947 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 11:26:27.680472   42947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 11:26:27.696605   42947 docker.go:217] disabling cri-docker service (if available) ...
	I1007 11:26:27.696657   42947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 11:26:27.710624   42947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 11:26:27.724536   42947 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 11:26:27.869033   42947 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 11:26:28.012003   42947 docker.go:233] disabling docker service ...
	I1007 11:26:28.012072   42947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 11:26:28.029003   42947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 11:26:28.043285   42947 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 11:26:28.189555   42947 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 11:26:28.330397   42947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 11:26:28.344884   42947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 11:26:28.364754   42947 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1007 11:26:28.364801   42947 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 11:26:28.364854   42947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:26:28.375834   42947 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 11:26:28.375904   42947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:26:28.386429   42947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:26:28.396921   42947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:26:28.407390   42947 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 11:26:28.418328   42947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:26:28.428902   42947 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:26:28.440494   42947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:26:28.451107   42947 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 11:26:28.460691   42947 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1007 11:26:28.460788   42947 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 11:26:28.470935   42947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:26:28.616051   42947 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 11:26:28.821053   42947 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 11:26:28.821130   42947 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 11:26:28.826296   42947 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1007 11:26:28.826318   42947 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1007 11:26:28.826331   42947 command_runner.go:130] > Device: 0,22	Inode: 1327        Links: 1
	I1007 11:26:28.826340   42947 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1007 11:26:28.826347   42947 command_runner.go:130] > Access: 2024-10-07 11:26:28.723434115 +0000
	I1007 11:26:28.826354   42947 command_runner.go:130] > Modify: 2024-10-07 11:26:28.668432078 +0000
	I1007 11:26:28.826361   42947 command_runner.go:130] > Change: 2024-10-07 11:26:28.668432078 +0000
	I1007 11:26:28.826367   42947 command_runner.go:130] >  Birth: -
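"Will wait 60s for socket path /var/run/crio/crio.sock" means the start code polls until the runtime socket exists (here it already did, as the stat output shows). A small sketch of such a wait loop; the 500ms polling interval and error handling are illustrative, not minikube's exact implementation:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for path until it exists or the timeout expires.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
            os.Exit(1)
        }
        fmt.Println("socket is ready")
    }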
	I1007 11:26:28.827044   42947 start.go:563] Will wait 60s for crictl version
	I1007 11:26:28.827113   42947 ssh_runner.go:195] Run: which crictl
	I1007 11:26:28.831302   42947 command_runner.go:130] > /usr/bin/crictl
	I1007 11:26:28.831369   42947 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 11:26:28.867565   42947 command_runner.go:130] > Version:  0.1.0
	I1007 11:26:28.867586   42947 command_runner.go:130] > RuntimeName:  cri-o
	I1007 11:26:28.867591   42947 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1007 11:26:28.867596   42947 command_runner.go:130] > RuntimeApiVersion:  v1
	I1007 11:26:28.868824   42947 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
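The version block above comes from running crictl against the CRI-O socket and parsing its output. A trivial Go sketch of invoking it the same way (it assumes crictl is installed at /usr/bin/crictl, as the `which crictl` step reported, and that sudo is available):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Query the container runtime version through crictl, as the log does.
        out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
        if err != nil {
            fmt.Println("crictl version failed:", err)
            return
        }
        fmt.Print(string(out))
    }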
	I1007 11:26:28.868885   42947 ssh_runner.go:195] Run: crio --version
	I1007 11:26:28.897709   42947 command_runner.go:130] > crio version 1.29.1
	I1007 11:26:28.897733   42947 command_runner.go:130] > Version:        1.29.1
	I1007 11:26:28.897742   42947 command_runner.go:130] > GitCommit:      unknown
	I1007 11:26:28.897748   42947 command_runner.go:130] > GitCommitDate:  unknown
	I1007 11:26:28.897754   42947 command_runner.go:130] > GitTreeState:   clean
	I1007 11:26:28.897763   42947 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I1007 11:26:28.897771   42947 command_runner.go:130] > GoVersion:      go1.21.6
	I1007 11:26:28.897777   42947 command_runner.go:130] > Compiler:       gc
	I1007 11:26:28.897784   42947 command_runner.go:130] > Platform:       linux/amd64
	I1007 11:26:28.897791   42947 command_runner.go:130] > Linkmode:       dynamic
	I1007 11:26:28.897797   42947 command_runner.go:130] > BuildTags:      
	I1007 11:26:28.897804   42947 command_runner.go:130] >   containers_image_ostree_stub
	I1007 11:26:28.897812   42947 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1007 11:26:28.897818   42947 command_runner.go:130] >   btrfs_noversion
	I1007 11:26:28.897834   42947 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1007 11:26:28.897840   42947 command_runner.go:130] >   libdm_no_deferred_remove
	I1007 11:26:28.897846   42947 command_runner.go:130] >   seccomp
	I1007 11:26:28.897865   42947 command_runner.go:130] > LDFlags:          unknown
	I1007 11:26:28.897875   42947 command_runner.go:130] > SeccompEnabled:   true
	I1007 11:26:28.897882   42947 command_runner.go:130] > AppArmorEnabled:  false
	I1007 11:26:28.899185   42947 ssh_runner.go:195] Run: crio --version
	I1007 11:26:28.928555   42947 command_runner.go:130] > crio version 1.29.1
	I1007 11:26:28.928577   42947 command_runner.go:130] > Version:        1.29.1
	I1007 11:26:28.928583   42947 command_runner.go:130] > GitCommit:      unknown
	I1007 11:26:28.928587   42947 command_runner.go:130] > GitCommitDate:  unknown
	I1007 11:26:28.928591   42947 command_runner.go:130] > GitTreeState:   clean
	I1007 11:26:28.928597   42947 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I1007 11:26:28.928600   42947 command_runner.go:130] > GoVersion:      go1.21.6
	I1007 11:26:28.928604   42947 command_runner.go:130] > Compiler:       gc
	I1007 11:26:28.928608   42947 command_runner.go:130] > Platform:       linux/amd64
	I1007 11:26:28.928612   42947 command_runner.go:130] > Linkmode:       dynamic
	I1007 11:26:28.928953   42947 command_runner.go:130] > BuildTags:      
	I1007 11:26:28.928977   42947 command_runner.go:130] >   containers_image_ostree_stub
	I1007 11:26:28.928987   42947 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1007 11:26:28.928994   42947 command_runner.go:130] >   btrfs_noversion
	I1007 11:26:28.929003   42947 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1007 11:26:28.929017   42947 command_runner.go:130] >   libdm_no_deferred_remove
	I1007 11:26:28.929789   42947 command_runner.go:130] >   seccomp
	I1007 11:26:28.929808   42947 command_runner.go:130] > LDFlags:          unknown
	I1007 11:26:28.929815   42947 command_runner.go:130] > SeccompEnabled:   true
	I1007 11:26:28.929821   42947 command_runner.go:130] > AppArmorEnabled:  false
	I1007 11:26:28.932740   42947 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 11:26:28.934121   42947 main.go:141] libmachine: (multinode-873106) Calling .GetIP
	I1007 11:26:28.936494   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:26:28.936803   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:26:28.936839   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:26:28.937035   42947 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 11:26:28.941456   42947 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1007 11:26:28.941527   42947 kubeadm.go:883] updating cluster {Name:multinode-873106 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:multinode-873106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.126 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadg
et:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 11:26:28.941646   42947 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:26:28.941683   42947 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:26:28.990132   42947 command_runner.go:130] > {
	I1007 11:26:28.990157   42947 command_runner.go:130] >   "images": [
	I1007 11:26:28.990161   42947 command_runner.go:130] >     {
	I1007 11:26:28.990169   42947 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1007 11:26:28.990175   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:28.990180   42947 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1007 11:26:28.990185   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990189   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:28.990198   42947 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1007 11:26:28.990205   42947 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1007 11:26:28.990209   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990213   42947 command_runner.go:130] >       "size": "87190579",
	I1007 11:26:28.990217   42947 command_runner.go:130] >       "uid": null,
	I1007 11:26:28.990221   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:28.990225   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:28.990230   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:28.990233   42947 command_runner.go:130] >     },
	I1007 11:26:28.990236   42947 command_runner.go:130] >     {
	I1007 11:26:28.990242   42947 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1007 11:26:28.990248   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:28.990254   42947 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1007 11:26:28.990258   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990267   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:28.990277   42947 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1007 11:26:28.990291   42947 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1007 11:26:28.990297   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990301   42947 command_runner.go:130] >       "size": "1363676",
	I1007 11:26:28.990304   42947 command_runner.go:130] >       "uid": null,
	I1007 11:26:28.990312   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:28.990316   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:28.990323   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:28.990326   42947 command_runner.go:130] >     },
	I1007 11:26:28.990330   42947 command_runner.go:130] >     {
	I1007 11:26:28.990338   42947 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1007 11:26:28.990342   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:28.990347   42947 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1007 11:26:28.990353   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990356   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:28.990363   42947 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1007 11:26:28.990373   42947 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1007 11:26:28.990377   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990383   42947 command_runner.go:130] >       "size": "31470524",
	I1007 11:26:28.990387   42947 command_runner.go:130] >       "uid": null,
	I1007 11:26:28.990393   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:28.990396   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:28.990400   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:28.990405   42947 command_runner.go:130] >     },
	I1007 11:26:28.990408   42947 command_runner.go:130] >     {
	I1007 11:26:28.990414   42947 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1007 11:26:28.990420   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:28.990425   42947 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1007 11:26:28.990430   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990434   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:28.990440   42947 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1007 11:26:28.990452   42947 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1007 11:26:28.990463   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990469   42947 command_runner.go:130] >       "size": "63273227",
	I1007 11:26:28.990473   42947 command_runner.go:130] >       "uid": null,
	I1007 11:26:28.990477   42947 command_runner.go:130] >       "username": "nonroot",
	I1007 11:26:28.990481   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:28.990485   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:28.990488   42947 command_runner.go:130] >     },
	I1007 11:26:28.990491   42947 command_runner.go:130] >     {
	I1007 11:26:28.990497   42947 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1007 11:26:28.990503   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:28.990508   42947 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1007 11:26:28.990513   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990517   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:28.990523   42947 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1007 11:26:28.990530   42947 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1007 11:26:28.990537   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990542   42947 command_runner.go:130] >       "size": "149009664",
	I1007 11:26:28.990545   42947 command_runner.go:130] >       "uid": {
	I1007 11:26:28.990549   42947 command_runner.go:130] >         "value": "0"
	I1007 11:26:28.990553   42947 command_runner.go:130] >       },
	I1007 11:26:28.990557   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:28.990560   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:28.990564   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:28.990567   42947 command_runner.go:130] >     },
	I1007 11:26:28.990571   42947 command_runner.go:130] >     {
	I1007 11:26:28.990577   42947 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1007 11:26:28.990581   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:28.990586   42947 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1007 11:26:28.990592   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990595   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:28.990603   42947 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1007 11:26:28.990612   42947 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1007 11:26:28.990615   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990624   42947 command_runner.go:130] >       "size": "95237600",
	I1007 11:26:28.990630   42947 command_runner.go:130] >       "uid": {
	I1007 11:26:28.990634   42947 command_runner.go:130] >         "value": "0"
	I1007 11:26:28.990637   42947 command_runner.go:130] >       },
	I1007 11:26:28.990641   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:28.990647   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:28.990651   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:28.990654   42947 command_runner.go:130] >     },
	I1007 11:26:28.990657   42947 command_runner.go:130] >     {
	I1007 11:26:28.990663   42947 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1007 11:26:28.990670   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:28.990675   42947 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1007 11:26:28.990681   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990685   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:28.990692   42947 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1007 11:26:28.990702   42947 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1007 11:26:28.990706   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990710   42947 command_runner.go:130] >       "size": "89437508",
	I1007 11:26:28.990713   42947 command_runner.go:130] >       "uid": {
	I1007 11:26:28.990717   42947 command_runner.go:130] >         "value": "0"
	I1007 11:26:28.990720   42947 command_runner.go:130] >       },
	I1007 11:26:28.990724   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:28.990728   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:28.990732   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:28.990735   42947 command_runner.go:130] >     },
	I1007 11:26:28.990738   42947 command_runner.go:130] >     {
	I1007 11:26:28.990744   42947 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1007 11:26:28.990750   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:28.990754   42947 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1007 11:26:28.990758   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990761   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:28.990773   42947 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1007 11:26:28.990782   42947 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1007 11:26:28.990786   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990790   42947 command_runner.go:130] >       "size": "92733849",
	I1007 11:26:28.990796   42947 command_runner.go:130] >       "uid": null,
	I1007 11:26:28.990800   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:28.990806   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:28.990809   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:28.990813   42947 command_runner.go:130] >     },
	I1007 11:26:28.990816   42947 command_runner.go:130] >     {
	I1007 11:26:28.990821   42947 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1007 11:26:28.990825   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:28.990829   42947 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1007 11:26:28.990832   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990836   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:28.990842   42947 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1007 11:26:28.990849   42947 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1007 11:26:28.990852   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990856   42947 command_runner.go:130] >       "size": "68420934",
	I1007 11:26:28.990860   42947 command_runner.go:130] >       "uid": {
	I1007 11:26:28.990863   42947 command_runner.go:130] >         "value": "0"
	I1007 11:26:28.990866   42947 command_runner.go:130] >       },
	I1007 11:26:28.990870   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:28.990873   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:28.990877   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:28.990880   42947 command_runner.go:130] >     },
	I1007 11:26:28.990883   42947 command_runner.go:130] >     {
	I1007 11:26:28.990888   42947 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1007 11:26:28.990892   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:28.990895   42947 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1007 11:26:28.990899   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990902   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:28.990909   42947 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1007 11:26:28.990915   42947 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1007 11:26:28.990918   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990922   42947 command_runner.go:130] >       "size": "742080",
	I1007 11:26:28.990925   42947 command_runner.go:130] >       "uid": {
	I1007 11:26:28.990929   42947 command_runner.go:130] >         "value": "65535"
	I1007 11:26:28.990933   42947 command_runner.go:130] >       },
	I1007 11:26:28.990937   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:28.990941   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:28.990945   42947 command_runner.go:130] >       "pinned": true
	I1007 11:26:28.990948   42947 command_runner.go:130] >     }
	I1007 11:26:28.990951   42947 command_runner.go:130] >   ]
	I1007 11:26:28.990954   42947 command_runner.go:130] > }
	I1007 11:26:28.991119   42947 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 11:26:28.991132   42947 crio.go:433] Images already preloaded, skipping extraction
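	The image inventory above is plain CRI JSON, so a preload check of this kind reduces to decoding the `crictl images --output json` payload and comparing repo tags. The sketch below is illustrative only (it is not minikube's crio.go implementation); the expected tags are copied from the listing above.

	// imagecheck.go: minimal sketch that decodes `sudo crictl images --output json`
	// and reports whether a handful of expected repo tags are present.
	// Illustrative only; not minikube's actual preload check.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	// imageList mirrors just the fields of the CRI image listing used here.
	type imageList struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "crictl failed:", err)
			os.Exit(1)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Fprintln(os.Stderr, "decode failed:", err)
			os.Exit(1)
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Expected tags taken from the listing above.
		for _, want := range []string{
			"registry.k8s.io/kube-apiserver:v1.31.1",
			"registry.k8s.io/etcd:3.5.15-0",
			"registry.k8s.io/pause:3.10",
		} {
			fmt.Printf("%-45s preloaded=%v\n", want, have[want])
		}
	}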
	I1007 11:26:28.991172   42947 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:26:29.027241   42947 command_runner.go:130] > {
	I1007 11:26:29.027267   42947 command_runner.go:130] >   "images": [
	I1007 11:26:29.027271   42947 command_runner.go:130] >     {
	I1007 11:26:29.027280   42947 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1007 11:26:29.027298   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:29.027305   42947 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1007 11:26:29.027308   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027312   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:29.027322   42947 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1007 11:26:29.027329   42947 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1007 11:26:29.027332   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027336   42947 command_runner.go:130] >       "size": "87190579",
	I1007 11:26:29.027341   42947 command_runner.go:130] >       "uid": null,
	I1007 11:26:29.027345   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:29.027355   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:29.027360   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:29.027363   42947 command_runner.go:130] >     },
	I1007 11:26:29.027367   42947 command_runner.go:130] >     {
	I1007 11:26:29.027382   42947 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1007 11:26:29.027389   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:29.027394   42947 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1007 11:26:29.027397   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027401   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:29.027408   42947 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1007 11:26:29.027417   42947 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1007 11:26:29.027421   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027425   42947 command_runner.go:130] >       "size": "1363676",
	I1007 11:26:29.027429   42947 command_runner.go:130] >       "uid": null,
	I1007 11:26:29.027436   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:29.027442   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:29.027447   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:29.027450   42947 command_runner.go:130] >     },
	I1007 11:26:29.027453   42947 command_runner.go:130] >     {
	I1007 11:26:29.027461   42947 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1007 11:26:29.027465   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:29.027470   42947 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1007 11:26:29.027475   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027478   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:29.027486   42947 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1007 11:26:29.027495   42947 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1007 11:26:29.027499   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027503   42947 command_runner.go:130] >       "size": "31470524",
	I1007 11:26:29.027507   42947 command_runner.go:130] >       "uid": null,
	I1007 11:26:29.027511   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:29.027521   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:29.027525   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:29.027528   42947 command_runner.go:130] >     },
	I1007 11:26:29.027531   42947 command_runner.go:130] >     {
	I1007 11:26:29.027537   42947 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1007 11:26:29.027544   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:29.027548   42947 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1007 11:26:29.027559   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027565   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:29.027573   42947 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1007 11:26:29.027587   42947 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1007 11:26:29.027592   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027596   42947 command_runner.go:130] >       "size": "63273227",
	I1007 11:26:29.027602   42947 command_runner.go:130] >       "uid": null,
	I1007 11:26:29.027606   42947 command_runner.go:130] >       "username": "nonroot",
	I1007 11:26:29.027614   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:29.027618   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:29.027622   42947 command_runner.go:130] >     },
	I1007 11:26:29.027625   42947 command_runner.go:130] >     {
	I1007 11:26:29.027631   42947 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1007 11:26:29.027637   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:29.027642   42947 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1007 11:26:29.027646   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027650   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:29.027658   42947 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1007 11:26:29.027667   42947 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1007 11:26:29.027671   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027675   42947 command_runner.go:130] >       "size": "149009664",
	I1007 11:26:29.027678   42947 command_runner.go:130] >       "uid": {
	I1007 11:26:29.027682   42947 command_runner.go:130] >         "value": "0"
	I1007 11:26:29.027685   42947 command_runner.go:130] >       },
	I1007 11:26:29.027689   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:29.027693   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:29.027697   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:29.027701   42947 command_runner.go:130] >     },
	I1007 11:26:29.027704   42947 command_runner.go:130] >     {
	I1007 11:26:29.027709   42947 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1007 11:26:29.027720   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:29.027724   42947 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1007 11:26:29.027730   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027739   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:29.027748   42947 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1007 11:26:29.027755   42947 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1007 11:26:29.027760   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027764   42947 command_runner.go:130] >       "size": "95237600",
	I1007 11:26:29.027767   42947 command_runner.go:130] >       "uid": {
	I1007 11:26:29.027771   42947 command_runner.go:130] >         "value": "0"
	I1007 11:26:29.027777   42947 command_runner.go:130] >       },
	I1007 11:26:29.027781   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:29.027785   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:29.027788   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:29.027792   42947 command_runner.go:130] >     },
	I1007 11:26:29.027795   42947 command_runner.go:130] >     {
	I1007 11:26:29.027800   42947 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1007 11:26:29.027807   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:29.027811   42947 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1007 11:26:29.027815   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027819   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:29.027826   42947 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1007 11:26:29.027836   42947 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1007 11:26:29.027841   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027847   42947 command_runner.go:130] >       "size": "89437508",
	I1007 11:26:29.027851   42947 command_runner.go:130] >       "uid": {
	I1007 11:26:29.027854   42947 command_runner.go:130] >         "value": "0"
	I1007 11:26:29.027858   42947 command_runner.go:130] >       },
	I1007 11:26:29.027862   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:29.027866   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:29.027869   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:29.027872   42947 command_runner.go:130] >     },
	I1007 11:26:29.027876   42947 command_runner.go:130] >     {
	I1007 11:26:29.027881   42947 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1007 11:26:29.027887   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:29.027892   42947 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1007 11:26:29.027900   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027906   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:29.027924   42947 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1007 11:26:29.027933   42947 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1007 11:26:29.027936   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027940   42947 command_runner.go:130] >       "size": "92733849",
	I1007 11:26:29.027943   42947 command_runner.go:130] >       "uid": null,
	I1007 11:26:29.027947   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:29.027951   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:29.027954   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:29.027958   42947 command_runner.go:130] >     },
	I1007 11:26:29.027961   42947 command_runner.go:130] >     {
	I1007 11:26:29.027968   42947 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1007 11:26:29.027973   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:29.027978   42947 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1007 11:26:29.028000   42947 command_runner.go:130] >       ],
	I1007 11:26:29.028004   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:29.028011   42947 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1007 11:26:29.028018   42947 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1007 11:26:29.028022   42947 command_runner.go:130] >       ],
	I1007 11:26:29.028026   42947 command_runner.go:130] >       "size": "68420934",
	I1007 11:26:29.028029   42947 command_runner.go:130] >       "uid": {
	I1007 11:26:29.028033   42947 command_runner.go:130] >         "value": "0"
	I1007 11:26:29.028037   42947 command_runner.go:130] >       },
	I1007 11:26:29.028040   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:29.028044   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:29.028048   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:29.028051   42947 command_runner.go:130] >     },
	I1007 11:26:29.028054   42947 command_runner.go:130] >     {
	I1007 11:26:29.028060   42947 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1007 11:26:29.028066   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:29.028070   42947 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1007 11:26:29.028073   42947 command_runner.go:130] >       ],
	I1007 11:26:29.028083   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:29.028092   42947 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1007 11:26:29.028101   42947 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1007 11:26:29.028106   42947 command_runner.go:130] >       ],
	I1007 11:26:29.028110   42947 command_runner.go:130] >       "size": "742080",
	I1007 11:26:29.028113   42947 command_runner.go:130] >       "uid": {
	I1007 11:26:29.028117   42947 command_runner.go:130] >         "value": "65535"
	I1007 11:26:29.028120   42947 command_runner.go:130] >       },
	I1007 11:26:29.028124   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:29.028128   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:29.028132   42947 command_runner.go:130] >       "pinned": true
	I1007 11:26:29.028135   42947 command_runner.go:130] >     }
	I1007 11:26:29.028138   42947 command_runner.go:130] >   ]
	I1007 11:26:29.028141   42947 command_runner.go:130] > }
	I1007 11:26:29.028808   42947 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 11:26:29.028822   42947 cache_images.go:84] Images are preloaded, skipping loading
	I1007 11:26:29.028830   42947 kubeadm.go:934] updating node { 192.168.39.51 8443 v1.31.1 crio true true} ...
	I1007 11:26:29.028930   42947 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-873106 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-873106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
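	The ExecStart override above is rendered from the node settings in that config line (binary path from KubernetesVersion, --hostname-override from the node name, --node-ip from the node IP). A toy rendering with text/template is sketched below; the template text and struct fields are assumptions for illustration, not minikube's kubeadm.go template.

	// kubeletflags.go: toy sketch of composing the kubelet ExecStart line from
	// node settings, loosely mirroring the systemd override shown above.
	// Template text and field names are illustrative assumptions, not minikube's own.
	package main

	import (
		"os"
		"text/template"
	)

	type node struct {
		KubernetesVersion string
		Hostname          string
		NodeIP            string
	}

	const execStart = `ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet ` +
		`--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf ` +
		`--config=/var/lib/kubelet/config.yaml ` +
		`--hostname-override={{.Hostname}} ` +
		`--kubeconfig=/etc/kubernetes/kubelet.conf ` +
		`--node-ip={{.NodeIP}}
	`

	func main() {
		t := template.Must(template.New("execstart").Parse(execStart))
		// Values copied from the log line above.
		if err := t.Execute(os.Stdout, node{
			KubernetesVersion: "v1.31.1",
			Hostname:          "multinode-873106",
			NodeIP:            "192.168.39.51",
		}); err != nil {
			panic(err)
		}
	}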
	I1007 11:26:29.028992   42947 ssh_runner.go:195] Run: crio config
	I1007 11:26:29.064745   42947 command_runner.go:130] ! time="2024-10-07 11:26:29.021941177Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1007 11:26:29.071512   42947 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1007 11:26:29.078279   42947 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1007 11:26:29.078305   42947 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1007 11:26:29.078316   42947 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1007 11:26:29.078321   42947 command_runner.go:130] > #
	I1007 11:26:29.078331   42947 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1007 11:26:29.078341   42947 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1007 11:26:29.078350   42947 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1007 11:26:29.078361   42947 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1007 11:26:29.078367   42947 command_runner.go:130] > # reload'.
	I1007 11:26:29.078375   42947 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1007 11:26:29.078383   42947 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1007 11:26:29.078395   42947 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1007 11:26:29.078405   42947 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1007 11:26:29.078418   42947 command_runner.go:130] > [crio]
	I1007 11:26:29.078429   42947 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1007 11:26:29.078434   42947 command_runner.go:130] > # containers images, in this directory.
	I1007 11:26:29.078439   42947 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1007 11:26:29.078448   42947 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1007 11:26:29.078455   42947 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1007 11:26:29.078462   42947 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1007 11:26:29.078467   42947 command_runner.go:130] > # imagestore = ""
	I1007 11:26:29.078474   42947 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1007 11:26:29.078479   42947 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1007 11:26:29.078484   42947 command_runner.go:130] > storage_driver = "overlay"
	I1007 11:26:29.078490   42947 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1007 11:26:29.078497   42947 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1007 11:26:29.078500   42947 command_runner.go:130] > storage_option = [
	I1007 11:26:29.078510   42947 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1007 11:26:29.078515   42947 command_runner.go:130] > ]
	I1007 11:26:29.078520   42947 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1007 11:26:29.078529   42947 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1007 11:26:29.078535   42947 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1007 11:26:29.078542   42947 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1007 11:26:29.078548   42947 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1007 11:26:29.078554   42947 command_runner.go:130] > # always happen on a node reboot
	I1007 11:26:29.078559   42947 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1007 11:26:29.078570   42947 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1007 11:26:29.078578   42947 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1007 11:26:29.078584   42947 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1007 11:26:29.078589   42947 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1007 11:26:29.078596   42947 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1007 11:26:29.078605   42947 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1007 11:26:29.078609   42947 command_runner.go:130] > # internal_wipe = true
	I1007 11:26:29.078626   42947 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1007 11:26:29.078637   42947 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1007 11:26:29.078641   42947 command_runner.go:130] > # internal_repair = false
	I1007 11:26:29.078646   42947 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1007 11:26:29.078655   42947 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1007 11:26:29.078660   42947 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1007 11:26:29.078667   42947 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1007 11:26:29.078676   42947 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1007 11:26:29.078681   42947 command_runner.go:130] > [crio.api]
	I1007 11:26:29.078687   42947 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1007 11:26:29.078694   42947 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1007 11:26:29.078699   42947 command_runner.go:130] > # IP address on which the stream server will listen.
	I1007 11:26:29.078706   42947 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1007 11:26:29.078712   42947 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1007 11:26:29.078719   42947 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1007 11:26:29.078723   42947 command_runner.go:130] > # stream_port = "0"
	I1007 11:26:29.078730   42947 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1007 11:26:29.078734   42947 command_runner.go:130] > # stream_enable_tls = false
	I1007 11:26:29.078742   42947 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1007 11:26:29.078745   42947 command_runner.go:130] > # stream_idle_timeout = ""
	I1007 11:26:29.078751   42947 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1007 11:26:29.078760   42947 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1007 11:26:29.078764   42947 command_runner.go:130] > # minutes.
	I1007 11:26:29.078770   42947 command_runner.go:130] > # stream_tls_cert = ""
	I1007 11:26:29.078776   42947 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1007 11:26:29.078782   42947 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1007 11:26:29.078787   42947 command_runner.go:130] > # stream_tls_key = ""
	I1007 11:26:29.078793   42947 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1007 11:26:29.078801   42947 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1007 11:26:29.078814   42947 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1007 11:26:29.078820   42947 command_runner.go:130] > # stream_tls_ca = ""
	I1007 11:26:29.078827   42947 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1007 11:26:29.078833   42947 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1007 11:26:29.078840   42947 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1007 11:26:29.078844   42947 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1007 11:26:29.078850   42947 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1007 11:26:29.078861   42947 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1007 11:26:29.078867   42947 command_runner.go:130] > [crio.runtime]
	I1007 11:26:29.078873   42947 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1007 11:26:29.078879   42947 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1007 11:26:29.078883   42947 command_runner.go:130] > # "nofile=1024:2048"
	I1007 11:26:29.078891   42947 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1007 11:26:29.078908   42947 command_runner.go:130] > # default_ulimits = [
	I1007 11:26:29.078916   42947 command_runner.go:130] > # ]
	I1007 11:26:29.078921   42947 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1007 11:26:29.078926   42947 command_runner.go:130] > # no_pivot = false
	I1007 11:26:29.078934   42947 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1007 11:26:29.078943   42947 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1007 11:26:29.078947   42947 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1007 11:26:29.078953   42947 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1007 11:26:29.078958   42947 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1007 11:26:29.078967   42947 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1007 11:26:29.078971   42947 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1007 11:26:29.078977   42947 command_runner.go:130] > # Cgroup setting for conmon
	I1007 11:26:29.078984   42947 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1007 11:26:29.078990   42947 command_runner.go:130] > conmon_cgroup = "pod"
	I1007 11:26:29.078997   42947 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1007 11:26:29.079003   42947 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1007 11:26:29.079009   42947 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1007 11:26:29.079015   42947 command_runner.go:130] > conmon_env = [
	I1007 11:26:29.079021   42947 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1007 11:26:29.079024   42947 command_runner.go:130] > ]
	I1007 11:26:29.079029   42947 command_runner.go:130] > # Additional environment variables to set for all the
	I1007 11:26:29.079036   42947 command_runner.go:130] > # containers. These are overridden if set in the
	I1007 11:26:29.079041   42947 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1007 11:26:29.079047   42947 command_runner.go:130] > # default_env = [
	I1007 11:26:29.079051   42947 command_runner.go:130] > # ]
	I1007 11:26:29.079058   42947 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1007 11:26:29.079066   42947 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1007 11:26:29.079071   42947 command_runner.go:130] > # selinux = false
	I1007 11:26:29.079077   42947 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1007 11:26:29.079085   42947 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1007 11:26:29.079093   42947 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1007 11:26:29.079097   42947 command_runner.go:130] > # seccomp_profile = ""
	I1007 11:26:29.079104   42947 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1007 11:26:29.079110   42947 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1007 11:26:29.079117   42947 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1007 11:26:29.079121   42947 command_runner.go:130] > # which might increase security.
	I1007 11:26:29.079128   42947 command_runner.go:130] > # This option is currently deprecated,
	I1007 11:26:29.079134   42947 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1007 11:26:29.079140   42947 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1007 11:26:29.079145   42947 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1007 11:26:29.079151   42947 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1007 11:26:29.079163   42947 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1007 11:26:29.079172   42947 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1007 11:26:29.079179   42947 command_runner.go:130] > # This option supports live configuration reload.
	I1007 11:26:29.079184   42947 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1007 11:26:29.079192   42947 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1007 11:26:29.079196   42947 command_runner.go:130] > # the cgroup blockio controller.
	I1007 11:26:29.079202   42947 command_runner.go:130] > # blockio_config_file = ""
	I1007 11:26:29.079209   42947 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1007 11:26:29.079215   42947 command_runner.go:130] > # blockio parameters.
	I1007 11:26:29.079219   42947 command_runner.go:130] > # blockio_reload = false
	I1007 11:26:29.079227   42947 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1007 11:26:29.079233   42947 command_runner.go:130] > # irqbalance daemon.
	I1007 11:26:29.079237   42947 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1007 11:26:29.079245   42947 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1007 11:26:29.079253   42947 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1007 11:26:29.079270   42947 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1007 11:26:29.079275   42947 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1007 11:26:29.079283   42947 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1007 11:26:29.079288   42947 command_runner.go:130] > # This option supports live configuration reload.
	I1007 11:26:29.079293   42947 command_runner.go:130] > # rdt_config_file = ""
	I1007 11:26:29.079299   42947 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1007 11:26:29.079305   42947 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1007 11:26:29.079324   42947 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1007 11:26:29.079330   42947 command_runner.go:130] > # separate_pull_cgroup = ""
	I1007 11:26:29.079339   42947 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1007 11:26:29.079347   42947 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1007 11:26:29.079353   42947 command_runner.go:130] > # will be added.
	I1007 11:26:29.079357   42947 command_runner.go:130] > # default_capabilities = [
	I1007 11:26:29.079363   42947 command_runner.go:130] > # 	"CHOWN",
	I1007 11:26:29.079367   42947 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1007 11:26:29.079371   42947 command_runner.go:130] > # 	"FSETID",
	I1007 11:26:29.079375   42947 command_runner.go:130] > # 	"FOWNER",
	I1007 11:26:29.079381   42947 command_runner.go:130] > # 	"SETGID",
	I1007 11:26:29.079384   42947 command_runner.go:130] > # 	"SETUID",
	I1007 11:26:29.079388   42947 command_runner.go:130] > # 	"SETPCAP",
	I1007 11:26:29.079392   42947 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1007 11:26:29.079398   42947 command_runner.go:130] > # 	"KILL",
	I1007 11:26:29.079402   42947 command_runner.go:130] > # ]
	I1007 11:26:29.079411   42947 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1007 11:26:29.079419   42947 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1007 11:26:29.079424   42947 command_runner.go:130] > # add_inheritable_capabilities = false
	I1007 11:26:29.079432   42947 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1007 11:26:29.079440   42947 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1007 11:26:29.079446   42947 command_runner.go:130] > default_sysctls = [
	I1007 11:26:29.079450   42947 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1007 11:26:29.079453   42947 command_runner.go:130] > ]
	I1007 11:26:29.079458   42947 command_runner.go:130] > # List of devices on the host that a
	I1007 11:26:29.079466   42947 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1007 11:26:29.079470   42947 command_runner.go:130] > # allowed_devices = [
	I1007 11:26:29.079476   42947 command_runner.go:130] > # 	"/dev/fuse",
	I1007 11:26:29.079479   42947 command_runner.go:130] > # ]
	I1007 11:26:29.079485   42947 command_runner.go:130] > # List of additional devices, specified as
	I1007 11:26:29.079492   42947 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1007 11:26:29.079499   42947 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1007 11:26:29.079505   42947 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1007 11:26:29.079515   42947 command_runner.go:130] > # additional_devices = [
	I1007 11:26:29.079519   42947 command_runner.go:130] > # ]
	I1007 11:26:29.079525   42947 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1007 11:26:29.079530   42947 command_runner.go:130] > # cdi_spec_dirs = [
	I1007 11:26:29.079534   42947 command_runner.go:130] > # 	"/etc/cdi",
	I1007 11:26:29.079538   42947 command_runner.go:130] > # 	"/var/run/cdi",
	I1007 11:26:29.079543   42947 command_runner.go:130] > # ]
	I1007 11:26:29.079549   42947 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1007 11:26:29.079556   42947 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1007 11:26:29.079563   42947 command_runner.go:130] > # Defaults to false.
	I1007 11:26:29.079568   42947 command_runner.go:130] > # device_ownership_from_security_context = false
	I1007 11:26:29.079576   42947 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1007 11:26:29.079583   42947 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1007 11:26:29.079590   42947 command_runner.go:130] > # hooks_dir = [
	I1007 11:26:29.079594   42947 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1007 11:26:29.079600   42947 command_runner.go:130] > # ]
	I1007 11:26:29.079606   42947 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1007 11:26:29.079615   42947 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1007 11:26:29.079621   42947 command_runner.go:130] > # its default mounts from the following two files:
	I1007 11:26:29.079624   42947 command_runner.go:130] > #
	I1007 11:26:29.079630   42947 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1007 11:26:29.079638   42947 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1007 11:26:29.079645   42947 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1007 11:26:29.079648   42947 command_runner.go:130] > #
	I1007 11:26:29.079654   42947 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1007 11:26:29.079662   42947 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1007 11:26:29.079669   42947 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1007 11:26:29.079678   42947 command_runner.go:130] > #      only add mounts it finds in this file.
	I1007 11:26:29.079683   42947 command_runner.go:130] > #
	I1007 11:26:29.079687   42947 command_runner.go:130] > # default_mounts_file = ""
	I1007 11:26:29.079694   42947 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1007 11:26:29.079700   42947 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1007 11:26:29.079706   42947 command_runner.go:130] > pids_limit = 1024
	I1007 11:26:29.079713   42947 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1007 11:26:29.079720   42947 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1007 11:26:29.079728   42947 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1007 11:26:29.079739   42947 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1007 11:26:29.079745   42947 command_runner.go:130] > # log_size_max = -1
	I1007 11:26:29.079752   42947 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1007 11:26:29.079758   42947 command_runner.go:130] > # log_to_journald = false
	I1007 11:26:29.079764   42947 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1007 11:26:29.079771   42947 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1007 11:26:29.079776   42947 command_runner.go:130] > # Path to directory for container attach sockets.
	I1007 11:26:29.079784   42947 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1007 11:26:29.079789   42947 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1007 11:26:29.079795   42947 command_runner.go:130] > # bind_mount_prefix = ""
	I1007 11:26:29.079800   42947 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1007 11:26:29.079806   42947 command_runner.go:130] > # read_only = false
	I1007 11:26:29.079813   42947 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1007 11:26:29.079821   42947 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1007 11:26:29.079825   42947 command_runner.go:130] > # live configuration reload.
	I1007 11:26:29.079831   42947 command_runner.go:130] > # log_level = "info"
	I1007 11:26:29.079836   42947 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1007 11:26:29.079843   42947 command_runner.go:130] > # This option supports live configuration reload.
	I1007 11:26:29.079847   42947 command_runner.go:130] > # log_filter = ""
	I1007 11:26:29.079852   42947 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1007 11:26:29.079861   42947 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1007 11:26:29.079865   42947 command_runner.go:130] > # separated by comma.
	I1007 11:26:29.079874   42947 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1007 11:26:29.079878   42947 command_runner.go:130] > # uid_mappings = ""
	I1007 11:26:29.079884   42947 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1007 11:26:29.079891   42947 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1007 11:26:29.079895   42947 command_runner.go:130] > # separated by comma.
	I1007 11:26:29.079904   42947 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1007 11:26:29.079911   42947 command_runner.go:130] > # gid_mappings = ""
	I1007 11:26:29.079919   42947 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1007 11:26:29.079925   42947 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1007 11:26:29.079933   42947 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1007 11:26:29.079942   42947 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1007 11:26:29.079948   42947 command_runner.go:130] > # minimum_mappable_uid = -1
	I1007 11:26:29.079954   42947 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1007 11:26:29.079962   42947 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1007 11:26:29.079968   42947 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1007 11:26:29.079978   42947 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1007 11:26:29.080001   42947 command_runner.go:130] > # minimum_mappable_gid = -1
	I1007 11:26:29.080010   42947 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1007 11:26:29.080022   42947 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1007 11:26:29.080029   42947 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1007 11:26:29.080034   42947 command_runner.go:130] > # ctr_stop_timeout = 30
	I1007 11:26:29.080039   42947 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1007 11:26:29.080047   42947 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1007 11:26:29.080053   42947 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1007 11:26:29.080060   42947 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1007 11:26:29.080064   42947 command_runner.go:130] > drop_infra_ctr = false
	I1007 11:26:29.080072   42947 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1007 11:26:29.080079   42947 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1007 11:26:29.080086   42947 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1007 11:26:29.080093   42947 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1007 11:26:29.080099   42947 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1007 11:26:29.080107   42947 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1007 11:26:29.080112   42947 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1007 11:26:29.080119   42947 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1007 11:26:29.080123   42947 command_runner.go:130] > # shared_cpuset = ""
	I1007 11:26:29.080131   42947 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1007 11:26:29.080136   42947 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1007 11:26:29.080140   42947 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1007 11:26:29.080147   42947 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1007 11:26:29.080153   42947 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1007 11:26:29.080158   42947 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1007 11:26:29.080169   42947 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1007 11:26:29.080175   42947 command_runner.go:130] > # enable_criu_support = false
	I1007 11:26:29.080181   42947 command_runner.go:130] > # Enable/disable the generation of the container,
	I1007 11:26:29.080189   42947 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1007 11:26:29.080196   42947 command_runner.go:130] > # enable_pod_events = false
	I1007 11:26:29.080201   42947 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1007 11:26:29.080216   42947 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1007 11:26:29.080220   42947 command_runner.go:130] > # default_runtime = "runc"
	I1007 11:26:29.080227   42947 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1007 11:26:29.080234   42947 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1007 11:26:29.080244   42947 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1007 11:26:29.080251   42947 command_runner.go:130] > # creation as a file is not desired either.
	I1007 11:26:29.080259   42947 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1007 11:26:29.080266   42947 command_runner.go:130] > # the hostname is being managed dynamically.
	I1007 11:26:29.080271   42947 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1007 11:26:29.080277   42947 command_runner.go:130] > # ]
	I1007 11:26:29.080282   42947 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1007 11:26:29.080290   42947 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1007 11:26:29.080298   42947 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1007 11:26:29.080303   42947 command_runner.go:130] > # Each entry in the table should follow the format:
	I1007 11:26:29.080309   42947 command_runner.go:130] > #
	I1007 11:26:29.080314   42947 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1007 11:26:29.080320   42947 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1007 11:26:29.080342   42947 command_runner.go:130] > # runtime_type = "oci"
	I1007 11:26:29.080348   42947 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1007 11:26:29.080353   42947 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1007 11:26:29.080360   42947 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1007 11:26:29.080364   42947 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1007 11:26:29.080369   42947 command_runner.go:130] > # monitor_env = []
	I1007 11:26:29.080374   42947 command_runner.go:130] > # privileged_without_host_devices = false
	I1007 11:26:29.080381   42947 command_runner.go:130] > # allowed_annotations = []
	I1007 11:26:29.080386   42947 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1007 11:26:29.080391   42947 command_runner.go:130] > # Where:
	I1007 11:26:29.080397   42947 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1007 11:26:29.080405   42947 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1007 11:26:29.080411   42947 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1007 11:26:29.080419   42947 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1007 11:26:29.080424   42947 command_runner.go:130] > #   in $PATH.
	I1007 11:26:29.080433   42947 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1007 11:26:29.080440   42947 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1007 11:26:29.080446   42947 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1007 11:26:29.080451   42947 command_runner.go:130] > #   state.
	I1007 11:26:29.080457   42947 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1007 11:26:29.080464   42947 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1007 11:26:29.080470   42947 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1007 11:26:29.080478   42947 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1007 11:26:29.080483   42947 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1007 11:26:29.080492   42947 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1007 11:26:29.080499   42947 command_runner.go:130] > #   The currently recognized values are:
	I1007 11:26:29.080508   42947 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1007 11:26:29.080517   42947 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1007 11:26:29.080525   42947 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1007 11:26:29.080533   42947 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1007 11:26:29.080542   42947 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1007 11:26:29.080551   42947 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1007 11:26:29.080559   42947 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1007 11:26:29.080567   42947 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1007 11:26:29.080574   42947 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1007 11:26:29.080581   42947 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1007 11:26:29.080585   42947 command_runner.go:130] > #   deprecated option "conmon".
	I1007 11:26:29.080594   42947 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1007 11:26:29.080600   42947 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1007 11:26:29.080608   42947 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1007 11:26:29.080615   42947 command_runner.go:130] > #   should be moved to the container's cgroup
	I1007 11:26:29.080621   42947 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1007 11:26:29.080627   42947 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1007 11:26:29.080633   42947 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1007 11:26:29.080640   42947 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1007 11:26:29.080643   42947 command_runner.go:130] > #
	I1007 11:26:29.080648   42947 command_runner.go:130] > # Using the seccomp notifier feature:
	I1007 11:26:29.080655   42947 command_runner.go:130] > #
	I1007 11:26:29.080661   42947 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1007 11:26:29.080669   42947 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1007 11:26:29.080675   42947 command_runner.go:130] > #
	I1007 11:26:29.080681   42947 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1007 11:26:29.080688   42947 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1007 11:26:29.080691   42947 command_runner.go:130] > #
	I1007 11:26:29.080699   42947 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1007 11:26:29.080704   42947 command_runner.go:130] > # feature.
	I1007 11:26:29.080707   42947 command_runner.go:130] > #
	I1007 11:26:29.080716   42947 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1007 11:26:29.080724   42947 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1007 11:26:29.080730   42947 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1007 11:26:29.080738   42947 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1007 11:26:29.080746   42947 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1007 11:26:29.080749   42947 command_runner.go:130] > #
	I1007 11:26:29.080755   42947 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1007 11:26:29.080763   42947 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1007 11:26:29.080768   42947 command_runner.go:130] > #
	I1007 11:26:29.080773   42947 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1007 11:26:29.080781   42947 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1007 11:26:29.080784   42947 command_runner.go:130] > #
	I1007 11:26:29.080789   42947 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1007 11:26:29.080797   42947 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1007 11:26:29.080800   42947 command_runner.go:130] > # limitation.
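	To make the seccomp notifier description above concrete, here is a minimal, hypothetical Pod sketch (the pod name is invented for illustration; the image is the pause image configured below). It sets the annotation and the required restartPolicy, and assumes the selected runtime handler lists "io.kubernetes.cri-o.seccompNotifierAction" in its allowed_annotations:
	apiVersion: v1
	kind: Pod
	metadata:
	  name: seccomp-notifier-demo          # hypothetical name
	  annotations:
	    # Only honored when the runtime handler allows this annotation.
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"
	spec:
	  restartPolicy: Never                 # required; otherwise the kubelet restarts the container
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.10
	    securityContext:
	      seccompProfile:
	        type: RuntimeDefault           # CRI-O modifies the chosen seccomp profile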
	I1007 11:26:29.080809   42947 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1007 11:26:29.080813   42947 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1007 11:26:29.080817   42947 command_runner.go:130] > runtime_type = "oci"
	I1007 11:26:29.080821   42947 command_runner.go:130] > runtime_root = "/run/runc"
	I1007 11:26:29.080825   42947 command_runner.go:130] > runtime_config_path = ""
	I1007 11:26:29.080829   42947 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1007 11:26:29.080835   42947 command_runner.go:130] > monitor_cgroup = "pod"
	I1007 11:26:29.080839   42947 command_runner.go:130] > monitor_exec_cgroup = ""
	I1007 11:26:29.080844   42947 command_runner.go:130] > monitor_env = [
	I1007 11:26:29.080849   42947 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1007 11:26:29.080855   42947 command_runner.go:130] > ]
	I1007 11:26:29.080859   42947 command_runner.go:130] > privileged_without_host_devices = false
	I1007 11:26:29.080865   42947 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1007 11:26:29.080878   42947 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1007 11:26:29.080885   42947 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1007 11:26:29.080892   42947 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1007 11:26:29.080904   42947 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1007 11:26:29.080912   42947 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1007 11:26:29.080921   42947 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1007 11:26:29.080930   42947 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1007 11:26:29.080938   42947 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1007 11:26:29.080944   42947 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1007 11:26:29.080950   42947 command_runner.go:130] > # Example:
	I1007 11:26:29.080955   42947 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1007 11:26:29.080962   42947 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1007 11:26:29.080967   42947 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1007 11:26:29.080973   42947 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1007 11:26:29.080977   42947 command_runner.go:130] > # cpuset = 0
	I1007 11:26:29.080981   42947 command_runner.go:130] > # cpushares = "0-1"
	I1007 11:26:29.080985   42947 command_runner.go:130] > # Where:
	I1007 11:26:29.080989   42947 command_runner.go:130] > # The workload name is workload-type.
	I1007 11:26:29.080998   42947 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1007 11:26:29.081004   42947 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1007 11:26:29.081011   42947 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1007 11:26:29.081019   42947 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1007 11:26:29.081026   42947 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
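	As a worked example of the workloads annotations described above, a hypothetical Pod opting into the "workload-type" workload and overriding cpushares for its container "app" could look like the sketch below (the pod name and the value "512" are assumptions for illustration):
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo                                   # hypothetical name
	  annotations:
	    io.crio/workload: ""                                # activation annotation; the value is ignored
	    io.crio.workload-type/app: '{"cpushares": "512"}'   # per-container override for container "app"
	spec:
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.10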
	I1007 11:26:29.081033   42947 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1007 11:26:29.081039   42947 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1007 11:26:29.081046   42947 command_runner.go:130] > # Default value is set to true
	I1007 11:26:29.081050   42947 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1007 11:26:29.081058   42947 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1007 11:26:29.081062   42947 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1007 11:26:29.081069   42947 command_runner.go:130] > # Default value is set to 'false'
	I1007 11:26:29.081073   42947 command_runner.go:130] > # disable_hostport_mapping = false
	I1007 11:26:29.081080   42947 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1007 11:26:29.081085   42947 command_runner.go:130] > #
	I1007 11:26:29.081090   42947 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1007 11:26:29.081096   42947 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1007 11:26:29.081102   42947 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1007 11:26:29.081107   42947 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1007 11:26:29.081114   42947 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1007 11:26:29.081119   42947 command_runner.go:130] > [crio.image]
	I1007 11:26:29.081124   42947 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1007 11:26:29.081128   42947 command_runner.go:130] > # default_transport = "docker://"
	I1007 11:26:29.081134   42947 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1007 11:26:29.081140   42947 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1007 11:26:29.081143   42947 command_runner.go:130] > # global_auth_file = ""
	I1007 11:26:29.081148   42947 command_runner.go:130] > # The image used to instantiate infra containers.
	I1007 11:26:29.081152   42947 command_runner.go:130] > # This option supports live configuration reload.
	I1007 11:26:29.081157   42947 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1007 11:26:29.081163   42947 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1007 11:26:29.081168   42947 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1007 11:26:29.081173   42947 command_runner.go:130] > # This option supports live configuration reload.
	I1007 11:26:29.081176   42947 command_runner.go:130] > # pause_image_auth_file = ""
	I1007 11:26:29.081183   42947 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1007 11:26:29.081188   42947 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1007 11:26:29.081193   42947 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1007 11:26:29.081199   42947 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1007 11:26:29.081202   42947 command_runner.go:130] > # pause_command = "/pause"
	I1007 11:26:29.081208   42947 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1007 11:26:29.081213   42947 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1007 11:26:29.081219   42947 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1007 11:26:29.081226   42947 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1007 11:26:29.081232   42947 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1007 11:26:29.081237   42947 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1007 11:26:29.081241   42947 command_runner.go:130] > # pinned_images = [
	I1007 11:26:29.081244   42947 command_runner.go:130] > # ]
	I1007 11:26:29.081249   42947 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1007 11:26:29.081255   42947 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1007 11:26:29.081261   42947 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1007 11:26:29.081266   42947 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1007 11:26:29.081271   42947 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1007 11:26:29.081277   42947 command_runner.go:130] > # signature_policy = ""
	I1007 11:26:29.081285   42947 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1007 11:26:29.081293   42947 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1007 11:26:29.081301   42947 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1007 11:26:29.081310   42947 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1007 11:26:29.081318   42947 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1007 11:26:29.081324   42947 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1007 11:26:29.081330   42947 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1007 11:26:29.081338   42947 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1007 11:26:29.081344   42947 command_runner.go:130] > # changing them here.
	I1007 11:26:29.081349   42947 command_runner.go:130] > # insecure_registries = [
	I1007 11:26:29.081358   42947 command_runner.go:130] > # ]
	I1007 11:26:29.081366   42947 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1007 11:26:29.081374   42947 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1007 11:26:29.081378   42947 command_runner.go:130] > # image_volumes = "mkdir"
	I1007 11:26:29.081385   42947 command_runner.go:130] > # Temporary directory to use for storing big files
	I1007 11:26:29.081389   42947 command_runner.go:130] > # big_files_temporary_dir = ""
	I1007 11:26:29.081398   42947 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1007 11:26:29.081405   42947 command_runner.go:130] > # CNI plugins.
	I1007 11:26:29.081408   42947 command_runner.go:130] > [crio.network]
	I1007 11:26:29.081416   42947 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1007 11:26:29.081423   42947 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1007 11:26:29.081427   42947 command_runner.go:130] > # cni_default_network = ""
	I1007 11:26:29.081435   42947 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1007 11:26:29.081442   42947 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1007 11:26:29.081450   42947 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1007 11:26:29.081455   42947 command_runner.go:130] > # plugin_dirs = [
	I1007 11:26:29.081459   42947 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1007 11:26:29.081464   42947 command_runner.go:130] > # ]
	I1007 11:26:29.081470   42947 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1007 11:26:29.081475   42947 command_runner.go:130] > [crio.metrics]
	I1007 11:26:29.081480   42947 command_runner.go:130] > # Globally enable or disable metrics support.
	I1007 11:26:29.081485   42947 command_runner.go:130] > enable_metrics = true
	I1007 11:26:29.081491   42947 command_runner.go:130] > # Specify enabled metrics collectors.
	I1007 11:26:29.081498   42947 command_runner.go:130] > # Per default all metrics are enabled.
	I1007 11:26:29.081509   42947 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1007 11:26:29.081520   42947 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1007 11:26:29.081528   42947 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1007 11:26:29.081533   42947 command_runner.go:130] > # metrics_collectors = [
	I1007 11:26:29.081536   42947 command_runner.go:130] > # 	"operations",
	I1007 11:26:29.081543   42947 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1007 11:26:29.081547   42947 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1007 11:26:29.081553   42947 command_runner.go:130] > # 	"operations_errors",
	I1007 11:26:29.081557   42947 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1007 11:26:29.081566   42947 command_runner.go:130] > # 	"image_pulls_by_name",
	I1007 11:26:29.081573   42947 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1007 11:26:29.081581   42947 command_runner.go:130] > # 	"image_pulls_failures",
	I1007 11:26:29.081587   42947 command_runner.go:130] > # 	"image_pulls_successes",
	I1007 11:26:29.081592   42947 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1007 11:26:29.081597   42947 command_runner.go:130] > # 	"image_layer_reuse",
	I1007 11:26:29.081602   42947 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1007 11:26:29.081608   42947 command_runner.go:130] > # 	"containers_oom_total",
	I1007 11:26:29.081612   42947 command_runner.go:130] > # 	"containers_oom",
	I1007 11:26:29.081618   42947 command_runner.go:130] > # 	"processes_defunct",
	I1007 11:26:29.081622   42947 command_runner.go:130] > # 	"operations_total",
	I1007 11:26:29.081628   42947 command_runner.go:130] > # 	"operations_latency_seconds",
	I1007 11:26:29.081632   42947 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1007 11:26:29.081638   42947 command_runner.go:130] > # 	"operations_errors_total",
	I1007 11:26:29.081642   42947 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1007 11:26:29.081649   42947 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1007 11:26:29.081653   42947 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1007 11:26:29.081658   42947 command_runner.go:130] > # 	"image_pulls_success_total",
	I1007 11:26:29.081665   42947 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1007 11:26:29.081672   42947 command_runner.go:130] > # 	"containers_oom_count_total",
	I1007 11:26:29.081679   42947 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1007 11:26:29.081683   42947 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1007 11:26:29.081688   42947 command_runner.go:130] > # ]
	I1007 11:26:29.081693   42947 command_runner.go:130] > # The port on which the metrics server will listen.
	I1007 11:26:29.081699   42947 command_runner.go:130] > # metrics_port = 9090
	I1007 11:26:29.081704   42947 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1007 11:26:29.081710   42947 command_runner.go:130] > # metrics_socket = ""
	I1007 11:26:29.081715   42947 command_runner.go:130] > # The certificate for the secure metrics server.
	I1007 11:26:29.081725   42947 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1007 11:26:29.081734   42947 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1007 11:26:29.081740   42947 command_runner.go:130] > # certificate on any modification event.
	I1007 11:26:29.081744   42947 command_runner.go:130] > # metrics_cert = ""
	I1007 11:26:29.081751   42947 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1007 11:26:29.081756   42947 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1007 11:26:29.081762   42947 command_runner.go:130] > # metrics_key = ""
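	Because enable_metrics is set to true above and metrics_port defaults to 9090, a Prometheus scraper only needs to target that port on the node. The following is a minimal, hypothetical scrape_config sketch; the job name is invented and using the node address 192.168.39.51 from this run is an assumption about the scraping setup:
	scrape_configs:
	  - job_name: crio                          # hypothetical job name
	    static_configs:
	      - targets: ["192.168.39.51:9090"]     # node IP from this run; CRI-O's default metrics_port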
	I1007 11:26:29.081768   42947 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1007 11:26:29.081773   42947 command_runner.go:130] > [crio.tracing]
	I1007 11:26:29.081778   42947 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1007 11:26:29.081784   42947 command_runner.go:130] > # enable_tracing = false
	I1007 11:26:29.081789   42947 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1007 11:26:29.081794   42947 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1007 11:26:29.081803   42947 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1007 11:26:29.081808   42947 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1007 11:26:29.081812   42947 command_runner.go:130] > # CRI-O NRI configuration.
	I1007 11:26:29.081820   42947 command_runner.go:130] > [crio.nri]
	I1007 11:26:29.081824   42947 command_runner.go:130] > # Globally enable or disable NRI.
	I1007 11:26:29.081828   42947 command_runner.go:130] > # enable_nri = false
	I1007 11:26:29.081835   42947 command_runner.go:130] > # NRI socket to listen on.
	I1007 11:26:29.081842   42947 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1007 11:26:29.081846   42947 command_runner.go:130] > # NRI plugin directory to use.
	I1007 11:26:29.081853   42947 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1007 11:26:29.081860   42947 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1007 11:26:29.081866   42947 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1007 11:26:29.081871   42947 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1007 11:26:29.081875   42947 command_runner.go:130] > # nri_disable_connections = false
	I1007 11:26:29.081883   42947 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1007 11:26:29.081887   42947 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1007 11:26:29.081897   42947 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1007 11:26:29.081901   42947 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1007 11:26:29.081907   42947 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1007 11:26:29.081911   42947 command_runner.go:130] > [crio.stats]
	I1007 11:26:29.081917   42947 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1007 11:26:29.081924   42947 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1007 11:26:29.081928   42947 command_runner.go:130] > # stats_collection_period = 0
	I1007 11:26:29.082047   42947 cni.go:84] Creating CNI manager for ""
	I1007 11:26:29.082061   42947 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1007 11:26:29.082070   42947 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 11:26:29.082088   42947 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.51 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-873106 NodeName:multinode-873106 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 11:26:29.082218   42947 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-873106"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 11:26:29.082278   42947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 11:26:29.092898   42947 command_runner.go:130] > kubeadm
	I1007 11:26:29.092921   42947 command_runner.go:130] > kubectl
	I1007 11:26:29.092925   42947 command_runner.go:130] > kubelet
	I1007 11:26:29.092942   42947 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 11:26:29.092990   42947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 11:26:29.103017   42947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1007 11:26:29.121792   42947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 11:26:29.139315   42947 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I1007 11:26:29.158566   42947 ssh_runner.go:195] Run: grep 192.168.39.51	control-plane.minikube.internal$ /etc/hosts
	I1007 11:26:29.162577   42947 command_runner.go:130] > 192.168.39.51	control-plane.minikube.internal
	I1007 11:26:29.162655   42947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:26:29.318385   42947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 11:26:29.347329   42947 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106 for IP: 192.168.39.51
	I1007 11:26:29.347348   42947 certs.go:194] generating shared ca certs ...
	I1007 11:26:29.347367   42947 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:26:29.347533   42947 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 11:26:29.347573   42947 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 11:26:29.347580   42947 certs.go:256] generating profile certs ...
	I1007 11:26:29.347649   42947 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106/client.key
	I1007 11:26:29.347915   42947 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106/apiserver.key.8b7bf9e8
	I1007 11:26:29.347965   42947 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106/proxy-client.key
	I1007 11:26:29.347978   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 11:26:29.348025   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 11:26:29.348041   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 11:26:29.348054   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 11:26:29.348066   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 11:26:29.348079   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 11:26:29.348091   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 11:26:29.348102   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 11:26:29.348157   42947 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 11:26:29.348185   42947 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 11:26:29.348194   42947 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 11:26:29.348215   42947 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 11:26:29.348251   42947 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 11:26:29.348277   42947 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 11:26:29.348312   42947 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 11:26:29.348338   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem -> /usr/share/ca-certificates/11096.pem
	I1007 11:26:29.348351   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /usr/share/ca-certificates/110962.pem
	I1007 11:26:29.348363   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:26:29.349002   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 11:26:29.395362   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 11:26:29.442350   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 11:26:29.477194   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 11:26:29.504304   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1007 11:26:29.546630   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 11:26:29.584892   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 11:26:29.635489   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 11:26:29.668380   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 11:26:29.696465   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 11:26:29.721690   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 11:26:29.746589   42947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 11:26:29.764874   42947 ssh_runner.go:195] Run: openssl version
	I1007 11:26:29.770848   42947 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1007 11:26:29.770926   42947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 11:26:29.782036   42947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:26:29.786872   42947 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:26:29.787114   42947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:26:29.787158   42947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:26:29.793234   42947 command_runner.go:130] > b5213941
	I1007 11:26:29.793296   42947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 11:26:29.803420   42947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 11:26:29.814560   42947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 11:26:29.819098   42947 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 11:26:29.819126   42947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 11:26:29.819160   42947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 11:26:29.825502   42947 command_runner.go:130] > 51391683
	I1007 11:26:29.825580   42947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
	I1007 11:26:29.835035   42947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 11:26:29.846081   42947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 11:26:29.850480   42947 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 11:26:29.850582   42947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 11:26:29.850633   42947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 11:26:29.856351   42947 command_runner.go:130] > 3ec20f2e
	I1007 11:26:29.856518   42947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 11:26:29.866851   42947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 11:26:29.871475   42947 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 11:26:29.871502   42947 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1007 11:26:29.871510   42947 command_runner.go:130] > Device: 253,1	Inode: 7337000     Links: 1
	I1007 11:26:29.871520   42947 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1007 11:26:29.871529   42947 command_runner.go:130] > Access: 2024-10-07 11:19:47.284088729 +0000
	I1007 11:26:29.871536   42947 command_runner.go:130] > Modify: 2024-10-07 11:19:47.284088729 +0000
	I1007 11:26:29.871543   42947 command_runner.go:130] > Change: 2024-10-07 11:19:47.284088729 +0000
	I1007 11:26:29.871550   42947 command_runner.go:130] >  Birth: 2024-10-07 11:19:47.284088729 +0000
	I1007 11:26:29.871623   42947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 11:26:29.877324   42947 command_runner.go:130] > Certificate will not expire
	I1007 11:26:29.877509   42947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 11:26:29.883393   42947 command_runner.go:130] > Certificate will not expire
	I1007 11:26:29.883507   42947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 11:26:29.889508   42947 command_runner.go:130] > Certificate will not expire
	I1007 11:26:29.889601   42947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 11:26:29.895167   42947 command_runner.go:130] > Certificate will not expire
	I1007 11:26:29.895251   42947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 11:26:29.900737   42947 command_runner.go:130] > Certificate will not expire
	I1007 11:26:29.900931   42947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1007 11:26:29.906338   42947 command_runner.go:130] > Certificate will not expire
	I1007 11:26:29.906496   42947 kubeadm.go:392] StartCluster: {Name:multinode-873106 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-873106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.126 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:26:29.906607   42947 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 11:26:29.906668   42947 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 11:26:29.943058   42947 command_runner.go:130] > fb1d8421da42a7887167bb3dfcee87d8b1927bd0cfdb27826f456b260650b7ae
	I1007 11:26:29.943098   42947 command_runner.go:130] > f3590c54e9965d4d724d2527908e35728496a93977b13476d79ab1d7e9448a3a
	I1007 11:26:29.943110   42947 command_runner.go:130] > c956258dc13d1fb534b779a8be5ed514ed82e53da7fbf9d9938c83f09db0db71
	I1007 11:26:29.943159   42947 command_runner.go:130] > 7a06720aace13ce689f397283c21f1d09ee33ff0b4580d1666878b9d29a7008b
	I1007 11:26:29.943171   42947 command_runner.go:130] > da82a1dafba5cfc5ce03d13bc0773af7458c0d722741bc0319262ae385cd7d2d
	I1007 11:26:29.943187   42947 command_runner.go:130] > edd0197acb1729fe1537ea8707c43578e9acf466574a81ee3e30c4417b15505d
	I1007 11:26:29.943198   42947 command_runner.go:130] > d8f06daea653405132f3538370db34a96f36c20dfe7594b52f1018c70fa55a84
	I1007 11:26:29.943321   42947 command_runner.go:130] > e93e7c28ad05a6b5f7458edc2f807f69cc00f61c2c9b2185e1dc46239ec54525
	I1007 11:26:29.943358   42947 command_runner.go:130] > 03a0eaccb2b60a7e10ef407ba77040a84c4210c709052030c365d7064fe3995f
	I1007 11:26:29.944722   42947 cri.go:89] found id: "fb1d8421da42a7887167bb3dfcee87d8b1927bd0cfdb27826f456b260650b7ae"
	I1007 11:26:29.944737   42947 cri.go:89] found id: "f3590c54e9965d4d724d2527908e35728496a93977b13476d79ab1d7e9448a3a"
	I1007 11:26:29.944740   42947 cri.go:89] found id: "c956258dc13d1fb534b779a8be5ed514ed82e53da7fbf9d9938c83f09db0db71"
	I1007 11:26:29.944744   42947 cri.go:89] found id: "7a06720aace13ce689f397283c21f1d09ee33ff0b4580d1666878b9d29a7008b"
	I1007 11:26:29.944747   42947 cri.go:89] found id: "da82a1dafba5cfc5ce03d13bc0773af7458c0d722741bc0319262ae385cd7d2d"
	I1007 11:26:29.944750   42947 cri.go:89] found id: "edd0197acb1729fe1537ea8707c43578e9acf466574a81ee3e30c4417b15505d"
	I1007 11:26:29.944752   42947 cri.go:89] found id: "d8f06daea653405132f3538370db34a96f36c20dfe7594b52f1018c70fa55a84"
	I1007 11:26:29.944754   42947 cri.go:89] found id: "e93e7c28ad05a6b5f7458edc2f807f69cc00f61c2c9b2185e1dc46239ec54525"
	I1007 11:26:29.944757   42947 cri.go:89] found id: "03a0eaccb2b60a7e10ef407ba77040a84c4210c709052030c365d7064fe3995f"
	I1007 11:26:29.944763   42947 cri.go:89] found id: ""
	I1007 11:26:29.944807   42947 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-873106 -n multinode-873106
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-873106 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (328.77s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (145.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 stop
E1007 11:29:36.387783   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:30:08.250412   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-873106 stop: exit status 82 (2m0.458065994s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-873106-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-873106 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-873106 status: (18.822208848s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-873106 status --alsologtostderr: (3.364415097s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-873106 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-873106 status --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-873106 -n multinode-873106
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-873106 logs -n 25: (2.072842002s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-873106 ssh -n                                                                 | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | multinode-873106-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-873106 cp multinode-873106-m02:/home/docker/cp-test.txt                       | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | multinode-873106:/home/docker/cp-test_multinode-873106-m02_multinode-873106.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-873106 ssh -n                                                                 | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | multinode-873106-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-873106 ssh -n multinode-873106 sudo cat                                       | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | /home/docker/cp-test_multinode-873106-m02_multinode-873106.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-873106 cp multinode-873106-m02:/home/docker/cp-test.txt                       | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | multinode-873106-m03:/home/docker/cp-test_multinode-873106-m02_multinode-873106-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-873106 ssh -n                                                                 | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | multinode-873106-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-873106 ssh -n multinode-873106-m03 sudo cat                                   | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | /home/docker/cp-test_multinode-873106-m02_multinode-873106-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-873106 cp testdata/cp-test.txt                                                | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | multinode-873106-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-873106 ssh -n                                                                 | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | multinode-873106-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-873106 cp multinode-873106-m03:/home/docker/cp-test.txt                       | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2112677138/001/cp-test_multinode-873106-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-873106 ssh -n                                                                 | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | multinode-873106-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-873106 cp multinode-873106-m03:/home/docker/cp-test.txt                       | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | multinode-873106:/home/docker/cp-test_multinode-873106-m03_multinode-873106.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-873106 ssh -n                                                                 | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | multinode-873106-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-873106 ssh -n multinode-873106 sudo cat                                       | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | /home/docker/cp-test_multinode-873106-m03_multinode-873106.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-873106 cp multinode-873106-m03:/home/docker/cp-test.txt                       | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | multinode-873106-m02:/home/docker/cp-test_multinode-873106-m03_multinode-873106-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-873106 ssh -n                                                                 | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | multinode-873106-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-873106 ssh -n multinode-873106-m02 sudo cat                                   | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | /home/docker/cp-test_multinode-873106-m03_multinode-873106-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-873106 node stop m03                                                          | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	| node    | multinode-873106 node start                                                             | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC | 07 Oct 24 11:22 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-873106                                                                | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC |                     |
	| stop    | -p multinode-873106                                                                     | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:22 UTC |                     |
	| start   | -p multinode-873106                                                                     | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:24 UTC | 07 Oct 24 11:28 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-873106                                                                | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:28 UTC |                     |
	| node    | multinode-873106 node delete                                                            | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:28 UTC | 07 Oct 24 11:28 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-873106 stop                                                                   | multinode-873106 | jenkins | v1.34.0 | 07 Oct 24 11:28 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 11:24:55
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 11:24:55.493543   42947 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:24:55.493677   42947 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:24:55.493688   42947 out.go:358] Setting ErrFile to fd 2...
	I1007 11:24:55.493699   42947 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:24:55.493895   42947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 11:24:55.494430   42947 out.go:352] Setting JSON to false
	I1007 11:24:55.495309   42947 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3989,"bootTime":1728296306,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 11:24:55.495404   42947 start.go:139] virtualization: kvm guest
	I1007 11:24:55.497892   42947 out.go:177] * [multinode-873106] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 11:24:55.499575   42947 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 11:24:55.499580   42947 notify.go:220] Checking for updates...
	I1007 11:24:55.501078   42947 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:24:55.502344   42947 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 11:24:55.503601   42947 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 11:24:55.504886   42947 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 11:24:55.506306   42947 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 11:24:55.507904   42947 config.go:182] Loaded profile config "multinode-873106": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:24:55.508002   42947 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 11:24:55.508463   42947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:24:55.508527   42947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:24:55.525841   42947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40737
	I1007 11:24:55.526370   42947 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:24:55.526999   42947 main.go:141] libmachine: Using API Version  1
	I1007 11:24:55.527024   42947 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:24:55.527413   42947 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:24:55.527594   42947 main.go:141] libmachine: (multinode-873106) Calling .DriverName
	I1007 11:24:55.563347   42947 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 11:24:55.564602   42947 start.go:297] selected driver: kvm2
	I1007 11:24:55.564624   42947 start.go:901] validating driver "kvm2" against &{Name:multinode-873106 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:multinode-873106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.126 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false ins
pektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:24:55.564780   42947 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 11:24:55.565140   42947 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:24:55.565218   42947 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19761-3912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 11:24:55.580983   42947 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 11:24:55.581645   42947 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 11:24:55.581677   42947 cni.go:84] Creating CNI manager for ""
	I1007 11:24:55.581727   42947 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1007 11:24:55.581782   42947 start.go:340] cluster config:
	{Name:multinode-873106 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-873106 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.126 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflo
w:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:24:55.581913   42947 iso.go:125] acquiring lock: {Name:mk894f62b1e70fe8556c552f818beb1e4be6fad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:24:55.583781   42947 out.go:177] * Starting "multinode-873106" primary control-plane node in "multinode-873106" cluster
	I1007 11:24:55.584979   42947 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:24:55.585014   42947 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 11:24:55.585022   42947 cache.go:56] Caching tarball of preloaded images
	I1007 11:24:55.585144   42947 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 11:24:55.585159   42947 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 11:24:55.585302   42947 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106/config.json ...
	I1007 11:24:55.585544   42947 start.go:360] acquireMachinesLock for multinode-873106: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 11:24:55.585622   42947 start.go:364] duration metric: took 56.743µs to acquireMachinesLock for "multinode-873106"
	I1007 11:24:55.585641   42947 start.go:96] Skipping create...Using existing machine configuration
	I1007 11:24:55.585650   42947 fix.go:54] fixHost starting: 
	I1007 11:24:55.585948   42947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:24:55.585988   42947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:24:55.600773   42947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37399
	I1007 11:24:55.601123   42947 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:24:55.601595   42947 main.go:141] libmachine: Using API Version  1
	I1007 11:24:55.601620   42947 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:24:55.601943   42947 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:24:55.602128   42947 main.go:141] libmachine: (multinode-873106) Calling .DriverName
	I1007 11:24:55.602319   42947 main.go:141] libmachine: (multinode-873106) Calling .GetState
	I1007 11:24:55.604037   42947 fix.go:112] recreateIfNeeded on multinode-873106: state=Running err=<nil>
	W1007 11:24:55.604060   42947 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 11:24:55.605894   42947 out.go:177] * Updating the running kvm2 "multinode-873106" VM ...
	I1007 11:24:55.607122   42947 machine.go:93] provisionDockerMachine start ...
	I1007 11:24:55.607150   42947 main.go:141] libmachine: (multinode-873106) Calling .DriverName
	I1007 11:24:55.607346   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHHostname
	I1007 11:24:55.609950   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:55.610379   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:24:55.610447   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:55.610518   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHPort
	I1007 11:24:55.610675   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:24:55.610792   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:24:55.610955   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHUsername
	I1007 11:24:55.611133   42947 main.go:141] libmachine: Using SSH client type: native
	I1007 11:24:55.611349   42947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I1007 11:24:55.611363   42947 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 11:24:55.725494   42947 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-873106
	
	I1007 11:24:55.725528   42947 main.go:141] libmachine: (multinode-873106) Calling .GetMachineName
	I1007 11:24:55.725777   42947 buildroot.go:166] provisioning hostname "multinode-873106"
	I1007 11:24:55.725800   42947 main.go:141] libmachine: (multinode-873106) Calling .GetMachineName
	I1007 11:24:55.726003   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHHostname
	I1007 11:24:55.728777   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:55.729154   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:24:55.729174   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:55.729340   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHPort
	I1007 11:24:55.729525   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:24:55.729689   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:24:55.729825   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHUsername
	I1007 11:24:55.729962   42947 main.go:141] libmachine: Using SSH client type: native
	I1007 11:24:55.730162   42947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I1007 11:24:55.730186   42947 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-873106 && echo "multinode-873106" | sudo tee /etc/hostname
	I1007 11:24:55.857862   42947 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-873106
	
	I1007 11:24:55.857897   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHHostname
	I1007 11:24:55.860664   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:55.861135   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:24:55.861166   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:55.861438   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHPort
	I1007 11:24:55.861644   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:24:55.861811   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:24:55.861932   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHUsername
	I1007 11:24:55.862060   42947 main.go:141] libmachine: Using SSH client type: native
	I1007 11:24:55.862231   42947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I1007 11:24:55.862247   42947 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-873106' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-873106/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-873106' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 11:24:55.969357   42947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 11:24:55.969381   42947 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 11:24:55.969425   42947 buildroot.go:174] setting up certificates
	I1007 11:24:55.969445   42947 provision.go:84] configureAuth start
	I1007 11:24:55.969458   42947 main.go:141] libmachine: (multinode-873106) Calling .GetMachineName
	I1007 11:24:55.969722   42947 main.go:141] libmachine: (multinode-873106) Calling .GetIP
	I1007 11:24:55.972760   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:55.973125   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:24:55.973153   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:55.973290   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHHostname
	I1007 11:24:55.975459   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:55.975788   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:24:55.975824   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:55.975937   42947 provision.go:143] copyHostCerts
	I1007 11:24:55.975968   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 11:24:55.976033   42947 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 11:24:55.976053   42947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 11:24:55.976122   42947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 11:24:55.976215   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 11:24:55.976233   42947 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 11:24:55.976237   42947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 11:24:55.976263   42947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 11:24:55.976320   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 11:24:55.976339   42947 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 11:24:55.976348   42947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 11:24:55.976370   42947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 11:24:55.976428   42947 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.multinode-873106 san=[127.0.0.1 192.168.39.51 localhost minikube multinode-873106]
	I1007 11:24:56.115595   42947 provision.go:177] copyRemoteCerts
	I1007 11:24:56.115648   42947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 11:24:56.115669   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHHostname
	I1007 11:24:56.118168   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:56.118490   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:24:56.118511   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:56.118677   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHPort
	I1007 11:24:56.118876   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:24:56.119043   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHUsername
	I1007 11:24:56.119174   42947 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/multinode-873106/id_rsa Username:docker}
	I1007 11:24:56.205793   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 11:24:56.205861   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1007 11:24:56.234096   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 11:24:56.234164   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 11:24:56.267817   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 11:24:56.267897   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 11:24:56.295900   42947 provision.go:87] duration metric: took 326.442396ms to configureAuth
	I1007 11:24:56.295924   42947 buildroot.go:189] setting minikube options for container-runtime
	I1007 11:24:56.296149   42947 config.go:182] Loaded profile config "multinode-873106": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:24:56.296221   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHHostname
	I1007 11:24:56.298827   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:56.299187   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:24:56.299216   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:24:56.299357   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHPort
	I1007 11:24:56.299583   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:24:56.299716   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:24:56.299877   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHUsername
	I1007 11:24:56.300048   42947 main.go:141] libmachine: Using SSH client type: native
	I1007 11:24:56.300252   42947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I1007 11:24:56.300268   42947 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 11:26:27.127297   42947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 11:26:27.127331   42947 machine.go:96] duration metric: took 1m31.520188095s to provisionDockerMachine
	I1007 11:26:27.127345   42947 start.go:293] postStartSetup for "multinode-873106" (driver="kvm2")
	I1007 11:26:27.127359   42947 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 11:26:27.127379   42947 main.go:141] libmachine: (multinode-873106) Calling .DriverName
	I1007 11:26:27.127712   42947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 11:26:27.127744   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHHostname
	I1007 11:26:27.131016   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:26:27.131435   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:26:27.131457   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:26:27.131588   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHPort
	I1007 11:26:27.131773   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:26:27.131906   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHUsername
	I1007 11:26:27.132086   42947 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/multinode-873106/id_rsa Username:docker}
	I1007 11:26:27.216235   42947 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 11:26:27.220605   42947 command_runner.go:130] > NAME=Buildroot
	I1007 11:26:27.220629   42947 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1007 11:26:27.220635   42947 command_runner.go:130] > ID=buildroot
	I1007 11:26:27.220642   42947 command_runner.go:130] > VERSION_ID=2023.02.9
	I1007 11:26:27.220649   42947 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1007 11:26:27.220681   42947 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 11:26:27.220699   42947 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 11:26:27.220780   42947 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 11:26:27.220892   42947 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 11:26:27.220906   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /etc/ssl/certs/110962.pem
	I1007 11:26:27.221027   42947 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 11:26:27.231135   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 11:26:27.255762   42947 start.go:296] duration metric: took 128.405146ms for postStartSetup
	I1007 11:26:27.255823   42947 fix.go:56] duration metric: took 1m31.670173136s for fixHost
	I1007 11:26:27.255846   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHHostname
	I1007 11:26:27.258263   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:26:27.258541   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:26:27.258565   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:26:27.258699   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHPort
	I1007 11:26:27.258867   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:26:27.259001   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:26:27.259111   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHUsername
	I1007 11:26:27.259224   42947 main.go:141] libmachine: Using SSH client type: native
	I1007 11:26:27.259459   42947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I1007 11:26:27.259472   42947 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 11:26:27.364830   42947 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728300387.322133753
	
	I1007 11:26:27.364850   42947 fix.go:216] guest clock: 1728300387.322133753
	I1007 11:26:27.364859   42947 fix.go:229] Guest: 2024-10-07 11:26:27.322133753 +0000 UTC Remote: 2024-10-07 11:26:27.255828163 +0000 UTC m=+91.800329531 (delta=66.30559ms)
	I1007 11:26:27.364885   42947 fix.go:200] guest clock delta is within tolerance: 66.30559ms
	I1007 11:26:27.364892   42947 start.go:83] releasing machines lock for "multinode-873106", held for 1m31.779257624s
	I1007 11:26:27.364915   42947 main.go:141] libmachine: (multinode-873106) Calling .DriverName
	I1007 11:26:27.365176   42947 main.go:141] libmachine: (multinode-873106) Calling .GetIP
	I1007 11:26:27.367610   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:26:27.368026   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:26:27.368058   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:26:27.368143   42947 main.go:141] libmachine: (multinode-873106) Calling .DriverName
	I1007 11:26:27.368627   42947 main.go:141] libmachine: (multinode-873106) Calling .DriverName
	I1007 11:26:27.368772   42947 main.go:141] libmachine: (multinode-873106) Calling .DriverName
	I1007 11:26:27.368845   42947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 11:26:27.368892   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHHostname
	I1007 11:26:27.368986   42947 ssh_runner.go:195] Run: cat /version.json
	I1007 11:26:27.369015   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHHostname
	I1007 11:26:27.371692   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:26:27.371866   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:26:27.372147   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:26:27.372178   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:26:27.372245   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:26:27.372281   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:26:27.372365   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHPort
	I1007 11:26:27.372484   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHPort
	I1007 11:26:27.372610   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:26:27.372687   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:26:27.372716   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHUsername
	I1007 11:26:27.372793   42947 main.go:141] libmachine: (multinode-873106) Calling .GetSSHUsername
	I1007 11:26:27.372854   42947 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/multinode-873106/id_rsa Username:docker}
	I1007 11:26:27.372892   42947 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/multinode-873106/id_rsa Username:docker}
	I1007 11:26:27.449818   42947 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I1007 11:26:27.449979   42947 ssh_runner.go:195] Run: systemctl --version
	I1007 11:26:27.483080   42947 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1007 11:26:27.483143   42947 command_runner.go:130] > systemd 252 (252)
	I1007 11:26:27.483164   42947 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1007 11:26:27.483210   42947 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 11:26:27.644338   42947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 11:26:27.653180   42947 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1007 11:26:27.653218   42947 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 11:26:27.653287   42947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 11:26:27.663150   42947 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1007 11:26:27.663178   42947 start.go:495] detecting cgroup driver to use...
	I1007 11:26:27.663268   42947 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 11:26:27.680472   42947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 11:26:27.696605   42947 docker.go:217] disabling cri-docker service (if available) ...
	I1007 11:26:27.696657   42947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 11:26:27.710624   42947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 11:26:27.724536   42947 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 11:26:27.869033   42947 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 11:26:28.012003   42947 docker.go:233] disabling docker service ...
	I1007 11:26:28.012072   42947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 11:26:28.029003   42947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 11:26:28.043285   42947 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 11:26:28.189555   42947 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 11:26:28.330397   42947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 11:26:28.344884   42947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 11:26:28.364754   42947 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1007 11:26:28.364801   42947 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 11:26:28.364854   42947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:26:28.375834   42947 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 11:26:28.375904   42947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:26:28.386429   42947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:26:28.396921   42947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:26:28.407390   42947 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 11:26:28.418328   42947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:26:28.428902   42947 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:26:28.440494   42947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:26:28.451107   42947 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 11:26:28.460691   42947 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1007 11:26:28.460788   42947 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 11:26:28.470935   42947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:26:28.616051   42947 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 11:26:28.821053   42947 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 11:26:28.821130   42947 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 11:26:28.826296   42947 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1007 11:26:28.826318   42947 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1007 11:26:28.826331   42947 command_runner.go:130] > Device: 0,22	Inode: 1327        Links: 1
	I1007 11:26:28.826340   42947 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1007 11:26:28.826347   42947 command_runner.go:130] > Access: 2024-10-07 11:26:28.723434115 +0000
	I1007 11:26:28.826354   42947 command_runner.go:130] > Modify: 2024-10-07 11:26:28.668432078 +0000
	I1007 11:26:28.826361   42947 command_runner.go:130] > Change: 2024-10-07 11:26:28.668432078 +0000
	I1007 11:26:28.826367   42947 command_runner.go:130] >  Birth: -
	I1007 11:26:28.827044   42947 start.go:563] Will wait 60s for crictl version
	I1007 11:26:28.827113   42947 ssh_runner.go:195] Run: which crictl
	I1007 11:26:28.831302   42947 command_runner.go:130] > /usr/bin/crictl
	I1007 11:26:28.831369   42947 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 11:26:28.867565   42947 command_runner.go:130] > Version:  0.1.0
	I1007 11:26:28.867586   42947 command_runner.go:130] > RuntimeName:  cri-o
	I1007 11:26:28.867591   42947 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1007 11:26:28.867596   42947 command_runner.go:130] > RuntimeApiVersion:  v1
	I1007 11:26:28.868824   42947 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 11:26:28.868885   42947 ssh_runner.go:195] Run: crio --version
	I1007 11:26:28.897709   42947 command_runner.go:130] > crio version 1.29.1
	I1007 11:26:28.897733   42947 command_runner.go:130] > Version:        1.29.1
	I1007 11:26:28.897742   42947 command_runner.go:130] > GitCommit:      unknown
	I1007 11:26:28.897748   42947 command_runner.go:130] > GitCommitDate:  unknown
	I1007 11:26:28.897754   42947 command_runner.go:130] > GitTreeState:   clean
	I1007 11:26:28.897763   42947 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I1007 11:26:28.897771   42947 command_runner.go:130] > GoVersion:      go1.21.6
	I1007 11:26:28.897777   42947 command_runner.go:130] > Compiler:       gc
	I1007 11:26:28.897784   42947 command_runner.go:130] > Platform:       linux/amd64
	I1007 11:26:28.897791   42947 command_runner.go:130] > Linkmode:       dynamic
	I1007 11:26:28.897797   42947 command_runner.go:130] > BuildTags:      
	I1007 11:26:28.897804   42947 command_runner.go:130] >   containers_image_ostree_stub
	I1007 11:26:28.897812   42947 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1007 11:26:28.897818   42947 command_runner.go:130] >   btrfs_noversion
	I1007 11:26:28.897834   42947 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1007 11:26:28.897840   42947 command_runner.go:130] >   libdm_no_deferred_remove
	I1007 11:26:28.897846   42947 command_runner.go:130] >   seccomp
	I1007 11:26:28.897865   42947 command_runner.go:130] > LDFlags:          unknown
	I1007 11:26:28.897875   42947 command_runner.go:130] > SeccompEnabled:   true
	I1007 11:26:28.897882   42947 command_runner.go:130] > AppArmorEnabled:  false
	I1007 11:26:28.899185   42947 ssh_runner.go:195] Run: crio --version
	I1007 11:26:28.928555   42947 command_runner.go:130] > crio version 1.29.1
	I1007 11:26:28.928577   42947 command_runner.go:130] > Version:        1.29.1
	I1007 11:26:28.928583   42947 command_runner.go:130] > GitCommit:      unknown
	I1007 11:26:28.928587   42947 command_runner.go:130] > GitCommitDate:  unknown
	I1007 11:26:28.928591   42947 command_runner.go:130] > GitTreeState:   clean
	I1007 11:26:28.928597   42947 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I1007 11:26:28.928600   42947 command_runner.go:130] > GoVersion:      go1.21.6
	I1007 11:26:28.928604   42947 command_runner.go:130] > Compiler:       gc
	I1007 11:26:28.928608   42947 command_runner.go:130] > Platform:       linux/amd64
	I1007 11:26:28.928612   42947 command_runner.go:130] > Linkmode:       dynamic
	I1007 11:26:28.928953   42947 command_runner.go:130] > BuildTags:      
	I1007 11:26:28.928977   42947 command_runner.go:130] >   containers_image_ostree_stub
	I1007 11:26:28.928987   42947 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1007 11:26:28.928994   42947 command_runner.go:130] >   btrfs_noversion
	I1007 11:26:28.929003   42947 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1007 11:26:28.929017   42947 command_runner.go:130] >   libdm_no_deferred_remove
	I1007 11:26:28.929789   42947 command_runner.go:130] >   seccomp
	I1007 11:26:28.929808   42947 command_runner.go:130] > LDFlags:          unknown
	I1007 11:26:28.929815   42947 command_runner.go:130] > SeccompEnabled:   true
	I1007 11:26:28.929821   42947 command_runner.go:130] > AppArmorEnabled:  false
	I1007 11:26:28.932740   42947 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 11:26:28.934121   42947 main.go:141] libmachine: (multinode-873106) Calling .GetIP
	I1007 11:26:28.936494   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:26:28.936803   42947 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:26:28.936839   42947 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:26:28.937035   42947 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 11:26:28.941456   42947 command_runner.go:130] > 192.168.39.1	host.minikube.internal
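Note: the grep above only verifies that /etc/hosts already maps host.minikube.internal to the gateway IP before the cluster config is updated. An illustrative Go sketch of that check (not minikube's code; the IP and hostname are taken from the log lines above):

	// Check whether /etc/hosts already contains the host.minikube.internal
	// mapping that the log greps for. Illustrative only.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const wantIP, wantHost = "192.168.39.1", "host.minikube.internal"
		f, err := os.Open("/etc/hosts")
		if err != nil {
			fmt.Println("open /etc/hosts:", err)
			return
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			fields := strings.Fields(sc.Text())
			if len(fields) >= 2 && fields[0] == wantIP && fields[1] == wantHost {
				fmt.Println("mapping already present:", sc.Text())
				return
			}
		}
		fmt.Println("mapping missing; it would need to be appended")
	}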
	I1007 11:26:28.941527   42947 kubeadm.go:883] updating cluster {Name:multinode-873106 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-873106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.126 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 11:26:28.941646   42947 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:26:28.941683   42947 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:26:28.990132   42947 command_runner.go:130] > {
	I1007 11:26:28.990157   42947 command_runner.go:130] >   "images": [
	I1007 11:26:28.990161   42947 command_runner.go:130] >     {
	I1007 11:26:28.990169   42947 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1007 11:26:28.990175   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:28.990180   42947 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1007 11:26:28.990185   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990189   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:28.990198   42947 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1007 11:26:28.990205   42947 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1007 11:26:28.990209   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990213   42947 command_runner.go:130] >       "size": "87190579",
	I1007 11:26:28.990217   42947 command_runner.go:130] >       "uid": null,
	I1007 11:26:28.990221   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:28.990225   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:28.990230   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:28.990233   42947 command_runner.go:130] >     },
	I1007 11:26:28.990236   42947 command_runner.go:130] >     {
	I1007 11:26:28.990242   42947 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1007 11:26:28.990248   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:28.990254   42947 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1007 11:26:28.990258   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990267   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:28.990277   42947 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1007 11:26:28.990291   42947 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1007 11:26:28.990297   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990301   42947 command_runner.go:130] >       "size": "1363676",
	I1007 11:26:28.990304   42947 command_runner.go:130] >       "uid": null,
	I1007 11:26:28.990312   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:28.990316   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:28.990323   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:28.990326   42947 command_runner.go:130] >     },
	I1007 11:26:28.990330   42947 command_runner.go:130] >     {
	I1007 11:26:28.990338   42947 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1007 11:26:28.990342   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:28.990347   42947 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1007 11:26:28.990353   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990356   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:28.990363   42947 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1007 11:26:28.990373   42947 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1007 11:26:28.990377   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990383   42947 command_runner.go:130] >       "size": "31470524",
	I1007 11:26:28.990387   42947 command_runner.go:130] >       "uid": null,
	I1007 11:26:28.990393   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:28.990396   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:28.990400   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:28.990405   42947 command_runner.go:130] >     },
	I1007 11:26:28.990408   42947 command_runner.go:130] >     {
	I1007 11:26:28.990414   42947 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1007 11:26:28.990420   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:28.990425   42947 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1007 11:26:28.990430   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990434   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:28.990440   42947 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1007 11:26:28.990452   42947 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1007 11:26:28.990463   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990469   42947 command_runner.go:130] >       "size": "63273227",
	I1007 11:26:28.990473   42947 command_runner.go:130] >       "uid": null,
	I1007 11:26:28.990477   42947 command_runner.go:130] >       "username": "nonroot",
	I1007 11:26:28.990481   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:28.990485   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:28.990488   42947 command_runner.go:130] >     },
	I1007 11:26:28.990491   42947 command_runner.go:130] >     {
	I1007 11:26:28.990497   42947 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1007 11:26:28.990503   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:28.990508   42947 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1007 11:26:28.990513   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990517   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:28.990523   42947 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1007 11:26:28.990530   42947 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1007 11:26:28.990537   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990542   42947 command_runner.go:130] >       "size": "149009664",
	I1007 11:26:28.990545   42947 command_runner.go:130] >       "uid": {
	I1007 11:26:28.990549   42947 command_runner.go:130] >         "value": "0"
	I1007 11:26:28.990553   42947 command_runner.go:130] >       },
	I1007 11:26:28.990557   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:28.990560   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:28.990564   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:28.990567   42947 command_runner.go:130] >     },
	I1007 11:26:28.990571   42947 command_runner.go:130] >     {
	I1007 11:26:28.990577   42947 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1007 11:26:28.990581   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:28.990586   42947 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1007 11:26:28.990592   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990595   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:28.990603   42947 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1007 11:26:28.990612   42947 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1007 11:26:28.990615   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990624   42947 command_runner.go:130] >       "size": "95237600",
	I1007 11:26:28.990630   42947 command_runner.go:130] >       "uid": {
	I1007 11:26:28.990634   42947 command_runner.go:130] >         "value": "0"
	I1007 11:26:28.990637   42947 command_runner.go:130] >       },
	I1007 11:26:28.990641   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:28.990647   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:28.990651   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:28.990654   42947 command_runner.go:130] >     },
	I1007 11:26:28.990657   42947 command_runner.go:130] >     {
	I1007 11:26:28.990663   42947 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1007 11:26:28.990670   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:28.990675   42947 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1007 11:26:28.990681   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990685   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:28.990692   42947 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1007 11:26:28.990702   42947 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1007 11:26:28.990706   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990710   42947 command_runner.go:130] >       "size": "89437508",
	I1007 11:26:28.990713   42947 command_runner.go:130] >       "uid": {
	I1007 11:26:28.990717   42947 command_runner.go:130] >         "value": "0"
	I1007 11:26:28.990720   42947 command_runner.go:130] >       },
	I1007 11:26:28.990724   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:28.990728   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:28.990732   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:28.990735   42947 command_runner.go:130] >     },
	I1007 11:26:28.990738   42947 command_runner.go:130] >     {
	I1007 11:26:28.990744   42947 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1007 11:26:28.990750   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:28.990754   42947 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1007 11:26:28.990758   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990761   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:28.990773   42947 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1007 11:26:28.990782   42947 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1007 11:26:28.990786   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990790   42947 command_runner.go:130] >       "size": "92733849",
	I1007 11:26:28.990796   42947 command_runner.go:130] >       "uid": null,
	I1007 11:26:28.990800   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:28.990806   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:28.990809   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:28.990813   42947 command_runner.go:130] >     },
	I1007 11:26:28.990816   42947 command_runner.go:130] >     {
	I1007 11:26:28.990821   42947 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1007 11:26:28.990825   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:28.990829   42947 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1007 11:26:28.990832   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990836   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:28.990842   42947 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1007 11:26:28.990849   42947 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1007 11:26:28.990852   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990856   42947 command_runner.go:130] >       "size": "68420934",
	I1007 11:26:28.990860   42947 command_runner.go:130] >       "uid": {
	I1007 11:26:28.990863   42947 command_runner.go:130] >         "value": "0"
	I1007 11:26:28.990866   42947 command_runner.go:130] >       },
	I1007 11:26:28.990870   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:28.990873   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:28.990877   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:28.990880   42947 command_runner.go:130] >     },
	I1007 11:26:28.990883   42947 command_runner.go:130] >     {
	I1007 11:26:28.990888   42947 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1007 11:26:28.990892   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:28.990895   42947 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1007 11:26:28.990899   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990902   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:28.990909   42947 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1007 11:26:28.990915   42947 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1007 11:26:28.990918   42947 command_runner.go:130] >       ],
	I1007 11:26:28.990922   42947 command_runner.go:130] >       "size": "742080",
	I1007 11:26:28.990925   42947 command_runner.go:130] >       "uid": {
	I1007 11:26:28.990929   42947 command_runner.go:130] >         "value": "65535"
	I1007 11:26:28.990933   42947 command_runner.go:130] >       },
	I1007 11:26:28.990937   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:28.990941   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:28.990945   42947 command_runner.go:130] >       "pinned": true
	I1007 11:26:28.990948   42947 command_runner.go:130] >     }
	I1007 11:26:28.990951   42947 command_runner.go:130] >   ]
	I1007 11:26:28.990954   42947 command_runner.go:130] > }
	I1007 11:26:28.991119   42947 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 11:26:28.991132   42947 crio.go:433] Images already preloaded, skipping extraction
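Note: the image inventory above is the JSON emitted by `crictl images --output json`, and the preload check essentially boils down to decoding that JSON and confirming the expected repo tags are present. A minimal sketch under that assumption (field names mirror the JSON shown in the log; this is not minikube's crio.go implementation, and the required-tag list is only a sample):

	// Decode the output of `crictl images --output json` and report whether a
	// few expected images are already present on the node. Illustrative only.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type imageList struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Println("crictl images failed:", err)
			return
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.31.1",
			"registry.k8s.io/etcd:3.5.15-0",
			"registry.k8s.io/pause:3.10",
		}
		for _, tag := range required {
			fmt.Printf("%-45s preloaded=%v\n", tag, have[tag])
		}
	}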
	I1007 11:26:28.991172   42947 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:26:29.027241   42947 command_runner.go:130] > {
	I1007 11:26:29.027267   42947 command_runner.go:130] >   "images": [
	I1007 11:26:29.027271   42947 command_runner.go:130] >     {
	I1007 11:26:29.027280   42947 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1007 11:26:29.027298   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:29.027305   42947 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1007 11:26:29.027308   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027312   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:29.027322   42947 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1007 11:26:29.027329   42947 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1007 11:26:29.027332   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027336   42947 command_runner.go:130] >       "size": "87190579",
	I1007 11:26:29.027341   42947 command_runner.go:130] >       "uid": null,
	I1007 11:26:29.027345   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:29.027355   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:29.027360   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:29.027363   42947 command_runner.go:130] >     },
	I1007 11:26:29.027367   42947 command_runner.go:130] >     {
	I1007 11:26:29.027382   42947 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1007 11:26:29.027389   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:29.027394   42947 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1007 11:26:29.027397   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027401   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:29.027408   42947 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1007 11:26:29.027417   42947 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1007 11:26:29.027421   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027425   42947 command_runner.go:130] >       "size": "1363676",
	I1007 11:26:29.027429   42947 command_runner.go:130] >       "uid": null,
	I1007 11:26:29.027436   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:29.027442   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:29.027447   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:29.027450   42947 command_runner.go:130] >     },
	I1007 11:26:29.027453   42947 command_runner.go:130] >     {
	I1007 11:26:29.027461   42947 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1007 11:26:29.027465   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:29.027470   42947 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1007 11:26:29.027475   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027478   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:29.027486   42947 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1007 11:26:29.027495   42947 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1007 11:26:29.027499   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027503   42947 command_runner.go:130] >       "size": "31470524",
	I1007 11:26:29.027507   42947 command_runner.go:130] >       "uid": null,
	I1007 11:26:29.027511   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:29.027521   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:29.027525   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:29.027528   42947 command_runner.go:130] >     },
	I1007 11:26:29.027531   42947 command_runner.go:130] >     {
	I1007 11:26:29.027537   42947 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1007 11:26:29.027544   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:29.027548   42947 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1007 11:26:29.027559   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027565   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:29.027573   42947 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1007 11:26:29.027587   42947 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1007 11:26:29.027592   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027596   42947 command_runner.go:130] >       "size": "63273227",
	I1007 11:26:29.027602   42947 command_runner.go:130] >       "uid": null,
	I1007 11:26:29.027606   42947 command_runner.go:130] >       "username": "nonroot",
	I1007 11:26:29.027614   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:29.027618   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:29.027622   42947 command_runner.go:130] >     },
	I1007 11:26:29.027625   42947 command_runner.go:130] >     {
	I1007 11:26:29.027631   42947 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1007 11:26:29.027637   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:29.027642   42947 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1007 11:26:29.027646   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027650   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:29.027658   42947 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1007 11:26:29.027667   42947 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1007 11:26:29.027671   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027675   42947 command_runner.go:130] >       "size": "149009664",
	I1007 11:26:29.027678   42947 command_runner.go:130] >       "uid": {
	I1007 11:26:29.027682   42947 command_runner.go:130] >         "value": "0"
	I1007 11:26:29.027685   42947 command_runner.go:130] >       },
	I1007 11:26:29.027689   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:29.027693   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:29.027697   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:29.027701   42947 command_runner.go:130] >     },
	I1007 11:26:29.027704   42947 command_runner.go:130] >     {
	I1007 11:26:29.027709   42947 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1007 11:26:29.027720   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:29.027724   42947 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1007 11:26:29.027730   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027739   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:29.027748   42947 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1007 11:26:29.027755   42947 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1007 11:26:29.027760   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027764   42947 command_runner.go:130] >       "size": "95237600",
	I1007 11:26:29.027767   42947 command_runner.go:130] >       "uid": {
	I1007 11:26:29.027771   42947 command_runner.go:130] >         "value": "0"
	I1007 11:26:29.027777   42947 command_runner.go:130] >       },
	I1007 11:26:29.027781   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:29.027785   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:29.027788   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:29.027792   42947 command_runner.go:130] >     },
	I1007 11:26:29.027795   42947 command_runner.go:130] >     {
	I1007 11:26:29.027800   42947 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1007 11:26:29.027807   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:29.027811   42947 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1007 11:26:29.027815   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027819   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:29.027826   42947 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1007 11:26:29.027836   42947 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1007 11:26:29.027841   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027847   42947 command_runner.go:130] >       "size": "89437508",
	I1007 11:26:29.027851   42947 command_runner.go:130] >       "uid": {
	I1007 11:26:29.027854   42947 command_runner.go:130] >         "value": "0"
	I1007 11:26:29.027858   42947 command_runner.go:130] >       },
	I1007 11:26:29.027862   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:29.027866   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:29.027869   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:29.027872   42947 command_runner.go:130] >     },
	I1007 11:26:29.027876   42947 command_runner.go:130] >     {
	I1007 11:26:29.027881   42947 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1007 11:26:29.027887   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:29.027892   42947 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1007 11:26:29.027900   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027906   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:29.027924   42947 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1007 11:26:29.027933   42947 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1007 11:26:29.027936   42947 command_runner.go:130] >       ],
	I1007 11:26:29.027940   42947 command_runner.go:130] >       "size": "92733849",
	I1007 11:26:29.027943   42947 command_runner.go:130] >       "uid": null,
	I1007 11:26:29.027947   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:29.027951   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:29.027954   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:29.027958   42947 command_runner.go:130] >     },
	I1007 11:26:29.027961   42947 command_runner.go:130] >     {
	I1007 11:26:29.027968   42947 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1007 11:26:29.027973   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:29.027978   42947 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1007 11:26:29.028000   42947 command_runner.go:130] >       ],
	I1007 11:26:29.028004   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:29.028011   42947 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1007 11:26:29.028018   42947 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1007 11:26:29.028022   42947 command_runner.go:130] >       ],
	I1007 11:26:29.028026   42947 command_runner.go:130] >       "size": "68420934",
	I1007 11:26:29.028029   42947 command_runner.go:130] >       "uid": {
	I1007 11:26:29.028033   42947 command_runner.go:130] >         "value": "0"
	I1007 11:26:29.028037   42947 command_runner.go:130] >       },
	I1007 11:26:29.028040   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:29.028044   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:29.028048   42947 command_runner.go:130] >       "pinned": false
	I1007 11:26:29.028051   42947 command_runner.go:130] >     },
	I1007 11:26:29.028054   42947 command_runner.go:130] >     {
	I1007 11:26:29.028060   42947 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1007 11:26:29.028066   42947 command_runner.go:130] >       "repoTags": [
	I1007 11:26:29.028070   42947 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1007 11:26:29.028073   42947 command_runner.go:130] >       ],
	I1007 11:26:29.028083   42947 command_runner.go:130] >       "repoDigests": [
	I1007 11:26:29.028092   42947 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1007 11:26:29.028101   42947 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1007 11:26:29.028106   42947 command_runner.go:130] >       ],
	I1007 11:26:29.028110   42947 command_runner.go:130] >       "size": "742080",
	I1007 11:26:29.028113   42947 command_runner.go:130] >       "uid": {
	I1007 11:26:29.028117   42947 command_runner.go:130] >         "value": "65535"
	I1007 11:26:29.028120   42947 command_runner.go:130] >       },
	I1007 11:26:29.028124   42947 command_runner.go:130] >       "username": "",
	I1007 11:26:29.028128   42947 command_runner.go:130] >       "spec": null,
	I1007 11:26:29.028132   42947 command_runner.go:130] >       "pinned": true
	I1007 11:26:29.028135   42947 command_runner.go:130] >     }
	I1007 11:26:29.028138   42947 command_runner.go:130] >   ]
	I1007 11:26:29.028141   42947 command_runner.go:130] > }
	I1007 11:26:29.028808   42947 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 11:26:29.028822   42947 cache_images.go:84] Images are preloaded, skipping loading
	I1007 11:26:29.028830   42947 kubeadm.go:934] updating node { 192.168.39.51 8443 v1.31.1 crio true true} ...
	I1007 11:26:29.028930   42947 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-873106 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-873106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
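Note: the block above is the kubelet systemd drop-in generated for the control-plane node, with the hostname override and --node-ip filled in per node. A hypothetical sketch of rendering such a drop-in with text/template; the unit body copies what the log prints (including the empty [Install] section), while the value substitution and printing to stdout are purely illustrative:

	// Render a kubelet systemd drop-in like the one shown above from a small
	// template. Values match the control-plane node reported in the log.
	package main

	import (
		"os"
		"text/template"
	)

	const unit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		_ = t.Execute(os.Stdout, map[string]string{
			"KubernetesVersion": "v1.31.1",
			"NodeName":          "multinode-873106",
			"NodeIP":            "192.168.39.51",
		})
	}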
	I1007 11:26:29.028992   42947 ssh_runner.go:195] Run: crio config
	I1007 11:26:29.064745   42947 command_runner.go:130] ! time="2024-10-07 11:26:29.021941177Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1007 11:26:29.071512   42947 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1007 11:26:29.078279   42947 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1007 11:26:29.078305   42947 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1007 11:26:29.078316   42947 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1007 11:26:29.078321   42947 command_runner.go:130] > #
	I1007 11:26:29.078331   42947 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1007 11:26:29.078341   42947 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1007 11:26:29.078350   42947 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1007 11:26:29.078361   42947 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1007 11:26:29.078367   42947 command_runner.go:130] > # reload'.
	I1007 11:26:29.078375   42947 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1007 11:26:29.078383   42947 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1007 11:26:29.078395   42947 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1007 11:26:29.078405   42947 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1007 11:26:29.078418   42947 command_runner.go:130] > [crio]
	I1007 11:26:29.078429   42947 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1007 11:26:29.078434   42947 command_runner.go:130] > # containers images, in this directory.
	I1007 11:26:29.078439   42947 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1007 11:26:29.078448   42947 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1007 11:26:29.078455   42947 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1007 11:26:29.078462   42947 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1007 11:26:29.078467   42947 command_runner.go:130] > # imagestore = ""
	I1007 11:26:29.078474   42947 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1007 11:26:29.078479   42947 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1007 11:26:29.078484   42947 command_runner.go:130] > storage_driver = "overlay"
	I1007 11:26:29.078490   42947 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1007 11:26:29.078497   42947 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1007 11:26:29.078500   42947 command_runner.go:130] > storage_option = [
	I1007 11:26:29.078510   42947 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1007 11:26:29.078515   42947 command_runner.go:130] > ]
	I1007 11:26:29.078520   42947 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1007 11:26:29.078529   42947 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1007 11:26:29.078535   42947 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1007 11:26:29.078542   42947 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1007 11:26:29.078548   42947 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1007 11:26:29.078554   42947 command_runner.go:130] > # always happen on a node reboot
	I1007 11:26:29.078559   42947 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1007 11:26:29.078570   42947 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1007 11:26:29.078578   42947 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1007 11:26:29.078584   42947 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1007 11:26:29.078589   42947 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1007 11:26:29.078596   42947 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1007 11:26:29.078605   42947 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1007 11:26:29.078609   42947 command_runner.go:130] > # internal_wipe = true
	I1007 11:26:29.078626   42947 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1007 11:26:29.078637   42947 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1007 11:26:29.078641   42947 command_runner.go:130] > # internal_repair = false
	I1007 11:26:29.078646   42947 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1007 11:26:29.078655   42947 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1007 11:26:29.078660   42947 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1007 11:26:29.078667   42947 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1007 11:26:29.078676   42947 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1007 11:26:29.078681   42947 command_runner.go:130] > [crio.api]
	I1007 11:26:29.078687   42947 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1007 11:26:29.078694   42947 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1007 11:26:29.078699   42947 command_runner.go:130] > # IP address on which the stream server will listen.
	I1007 11:26:29.078706   42947 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1007 11:26:29.078712   42947 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1007 11:26:29.078719   42947 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1007 11:26:29.078723   42947 command_runner.go:130] > # stream_port = "0"
	I1007 11:26:29.078730   42947 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1007 11:26:29.078734   42947 command_runner.go:130] > # stream_enable_tls = false
	I1007 11:26:29.078742   42947 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1007 11:26:29.078745   42947 command_runner.go:130] > # stream_idle_timeout = ""
	I1007 11:26:29.078751   42947 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1007 11:26:29.078760   42947 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1007 11:26:29.078764   42947 command_runner.go:130] > # minutes.
	I1007 11:26:29.078770   42947 command_runner.go:130] > # stream_tls_cert = ""
	I1007 11:26:29.078776   42947 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1007 11:26:29.078782   42947 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1007 11:26:29.078787   42947 command_runner.go:130] > # stream_tls_key = ""
	I1007 11:26:29.078793   42947 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1007 11:26:29.078801   42947 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1007 11:26:29.078814   42947 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1007 11:26:29.078820   42947 command_runner.go:130] > # stream_tls_ca = ""
	I1007 11:26:29.078827   42947 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1007 11:26:29.078833   42947 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1007 11:26:29.078840   42947 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1007 11:26:29.078844   42947 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1007 11:26:29.078850   42947 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1007 11:26:29.078861   42947 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1007 11:26:29.078867   42947 command_runner.go:130] > [crio.runtime]
	I1007 11:26:29.078873   42947 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1007 11:26:29.078879   42947 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1007 11:26:29.078883   42947 command_runner.go:130] > # "nofile=1024:2048"
	I1007 11:26:29.078891   42947 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1007 11:26:29.078908   42947 command_runner.go:130] > # default_ulimits = [
	I1007 11:26:29.078916   42947 command_runner.go:130] > # ]
	I1007 11:26:29.078921   42947 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1007 11:26:29.078926   42947 command_runner.go:130] > # no_pivot = false
	I1007 11:26:29.078934   42947 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1007 11:26:29.078943   42947 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1007 11:26:29.078947   42947 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1007 11:26:29.078953   42947 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1007 11:26:29.078958   42947 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1007 11:26:29.078967   42947 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1007 11:26:29.078971   42947 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1007 11:26:29.078977   42947 command_runner.go:130] > # Cgroup setting for conmon
	I1007 11:26:29.078984   42947 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1007 11:26:29.078990   42947 command_runner.go:130] > conmon_cgroup = "pod"
	I1007 11:26:29.078997   42947 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1007 11:26:29.079003   42947 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1007 11:26:29.079009   42947 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1007 11:26:29.079015   42947 command_runner.go:130] > conmon_env = [
	I1007 11:26:29.079021   42947 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1007 11:26:29.079024   42947 command_runner.go:130] > ]
	I1007 11:26:29.079029   42947 command_runner.go:130] > # Additional environment variables to set for all the
	I1007 11:26:29.079036   42947 command_runner.go:130] > # containers. These are overridden if set in the
	I1007 11:26:29.079041   42947 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1007 11:26:29.079047   42947 command_runner.go:130] > # default_env = [
	I1007 11:26:29.079051   42947 command_runner.go:130] > # ]
	I1007 11:26:29.079058   42947 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1007 11:26:29.079066   42947 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1007 11:26:29.079071   42947 command_runner.go:130] > # selinux = false
	I1007 11:26:29.079077   42947 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1007 11:26:29.079085   42947 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1007 11:26:29.079093   42947 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1007 11:26:29.079097   42947 command_runner.go:130] > # seccomp_profile = ""
	I1007 11:26:29.079104   42947 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1007 11:26:29.079110   42947 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1007 11:26:29.079117   42947 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1007 11:26:29.079121   42947 command_runner.go:130] > # which might increase security.
	I1007 11:26:29.079128   42947 command_runner.go:130] > # This option is currently deprecated,
	I1007 11:26:29.079134   42947 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1007 11:26:29.079140   42947 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1007 11:26:29.079145   42947 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1007 11:26:29.079151   42947 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1007 11:26:29.079163   42947 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1007 11:26:29.079172   42947 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1007 11:26:29.079179   42947 command_runner.go:130] > # This option supports live configuration reload.
	I1007 11:26:29.079184   42947 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1007 11:26:29.079192   42947 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1007 11:26:29.079196   42947 command_runner.go:130] > # the cgroup blockio controller.
	I1007 11:26:29.079202   42947 command_runner.go:130] > # blockio_config_file = ""
	I1007 11:26:29.079209   42947 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1007 11:26:29.079215   42947 command_runner.go:130] > # blockio parameters.
	I1007 11:26:29.079219   42947 command_runner.go:130] > # blockio_reload = false
	I1007 11:26:29.079227   42947 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1007 11:26:29.079233   42947 command_runner.go:130] > # irqbalance daemon.
	I1007 11:26:29.079237   42947 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1007 11:26:29.079245   42947 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1007 11:26:29.079253   42947 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1007 11:26:29.079270   42947 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1007 11:26:29.079275   42947 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1007 11:26:29.079283   42947 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1007 11:26:29.079288   42947 command_runner.go:130] > # This option supports live configuration reload.
	I1007 11:26:29.079293   42947 command_runner.go:130] > # rdt_config_file = ""
	I1007 11:26:29.079299   42947 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1007 11:26:29.079305   42947 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1007 11:26:29.079324   42947 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1007 11:26:29.079330   42947 command_runner.go:130] > # separate_pull_cgroup = ""
	I1007 11:26:29.079339   42947 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1007 11:26:29.079347   42947 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1007 11:26:29.079353   42947 command_runner.go:130] > # will be added.
	I1007 11:26:29.079357   42947 command_runner.go:130] > # default_capabilities = [
	I1007 11:26:29.079363   42947 command_runner.go:130] > # 	"CHOWN",
	I1007 11:26:29.079367   42947 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1007 11:26:29.079371   42947 command_runner.go:130] > # 	"FSETID",
	I1007 11:26:29.079375   42947 command_runner.go:130] > # 	"FOWNER",
	I1007 11:26:29.079381   42947 command_runner.go:130] > # 	"SETGID",
	I1007 11:26:29.079384   42947 command_runner.go:130] > # 	"SETUID",
	I1007 11:26:29.079388   42947 command_runner.go:130] > # 	"SETPCAP",
	I1007 11:26:29.079392   42947 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1007 11:26:29.079398   42947 command_runner.go:130] > # 	"KILL",
	I1007 11:26:29.079402   42947 command_runner.go:130] > # ]
	I1007 11:26:29.079411   42947 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1007 11:26:29.079419   42947 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1007 11:26:29.079424   42947 command_runner.go:130] > # add_inheritable_capabilities = false
	I1007 11:26:29.079432   42947 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1007 11:26:29.079440   42947 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1007 11:26:29.079446   42947 command_runner.go:130] > default_sysctls = [
	I1007 11:26:29.079450   42947 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1007 11:26:29.079453   42947 command_runner.go:130] > ]
	I1007 11:26:29.079458   42947 command_runner.go:130] > # List of devices on the host that a
	I1007 11:26:29.079466   42947 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1007 11:26:29.079470   42947 command_runner.go:130] > # allowed_devices = [
	I1007 11:26:29.079476   42947 command_runner.go:130] > # 	"/dev/fuse",
	I1007 11:26:29.079479   42947 command_runner.go:130] > # ]
	I1007 11:26:29.079485   42947 command_runner.go:130] > # List of additional devices. specified as
	I1007 11:26:29.079492   42947 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1007 11:26:29.079499   42947 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1007 11:26:29.079505   42947 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1007 11:26:29.079515   42947 command_runner.go:130] > # additional_devices = [
	I1007 11:26:29.079519   42947 command_runner.go:130] > # ]
	I1007 11:26:29.079525   42947 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1007 11:26:29.079530   42947 command_runner.go:130] > # cdi_spec_dirs = [
	I1007 11:26:29.079534   42947 command_runner.go:130] > # 	"/etc/cdi",
	I1007 11:26:29.079538   42947 command_runner.go:130] > # 	"/var/run/cdi",
	I1007 11:26:29.079543   42947 command_runner.go:130] > # ]
	I1007 11:26:29.079549   42947 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1007 11:26:29.079556   42947 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1007 11:26:29.079563   42947 command_runner.go:130] > # Defaults to false.
	I1007 11:26:29.079568   42947 command_runner.go:130] > # device_ownership_from_security_context = false
	I1007 11:26:29.079576   42947 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1007 11:26:29.079583   42947 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1007 11:26:29.079590   42947 command_runner.go:130] > # hooks_dir = [
	I1007 11:26:29.079594   42947 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1007 11:26:29.079600   42947 command_runner.go:130] > # ]
	I1007 11:26:29.079606   42947 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1007 11:26:29.079615   42947 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1007 11:26:29.079621   42947 command_runner.go:130] > # its default mounts from the following two files:
	I1007 11:26:29.079624   42947 command_runner.go:130] > #
	I1007 11:26:29.079630   42947 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1007 11:26:29.079638   42947 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1007 11:26:29.079645   42947 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1007 11:26:29.079648   42947 command_runner.go:130] > #
	I1007 11:26:29.079654   42947 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1007 11:26:29.079662   42947 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1007 11:26:29.079669   42947 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1007 11:26:29.079678   42947 command_runner.go:130] > #      only add mounts it finds in this file.
	I1007 11:26:29.079683   42947 command_runner.go:130] > #
	I1007 11:26:29.079687   42947 command_runner.go:130] > # default_mounts_file = ""
	I1007 11:26:29.079694   42947 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1007 11:26:29.079700   42947 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1007 11:26:29.079706   42947 command_runner.go:130] > pids_limit = 1024
	I1007 11:26:29.079713   42947 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1007 11:26:29.079720   42947 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1007 11:26:29.079728   42947 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1007 11:26:29.079739   42947 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1007 11:26:29.079745   42947 command_runner.go:130] > # log_size_max = -1
	I1007 11:26:29.079752   42947 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1007 11:26:29.079758   42947 command_runner.go:130] > # log_to_journald = false
	I1007 11:26:29.079764   42947 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1007 11:26:29.079771   42947 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1007 11:26:29.079776   42947 command_runner.go:130] > # Path to directory for container attach sockets.
	I1007 11:26:29.079784   42947 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1007 11:26:29.079789   42947 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1007 11:26:29.079795   42947 command_runner.go:130] > # bind_mount_prefix = ""
	I1007 11:26:29.079800   42947 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1007 11:26:29.079806   42947 command_runner.go:130] > # read_only = false
	I1007 11:26:29.079813   42947 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1007 11:26:29.079821   42947 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1007 11:26:29.079825   42947 command_runner.go:130] > # live configuration reload.
	I1007 11:26:29.079831   42947 command_runner.go:130] > # log_level = "info"
	I1007 11:26:29.079836   42947 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1007 11:26:29.079843   42947 command_runner.go:130] > # This option supports live configuration reload.
	I1007 11:26:29.079847   42947 command_runner.go:130] > # log_filter = ""
	I1007 11:26:29.079852   42947 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1007 11:26:29.079861   42947 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1007 11:26:29.079865   42947 command_runner.go:130] > # separated by comma.
	I1007 11:26:29.079874   42947 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1007 11:26:29.079878   42947 command_runner.go:130] > # uid_mappings = ""
	I1007 11:26:29.079884   42947 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1007 11:26:29.079891   42947 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1007 11:26:29.079895   42947 command_runner.go:130] > # separated by comma.
	I1007 11:26:29.079904   42947 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1007 11:26:29.079911   42947 command_runner.go:130] > # gid_mappings = ""
	I1007 11:26:29.079919   42947 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1007 11:26:29.079925   42947 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1007 11:26:29.079933   42947 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1007 11:26:29.079942   42947 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1007 11:26:29.079948   42947 command_runner.go:130] > # minimum_mappable_uid = -1
	I1007 11:26:29.079954   42947 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1007 11:26:29.079962   42947 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1007 11:26:29.079968   42947 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1007 11:26:29.079978   42947 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1007 11:26:29.080001   42947 command_runner.go:130] > # minimum_mappable_gid = -1
	I1007 11:26:29.080010   42947 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1007 11:26:29.080022   42947 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1007 11:26:29.080029   42947 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1007 11:26:29.080034   42947 command_runner.go:130] > # ctr_stop_timeout = 30
	I1007 11:26:29.080039   42947 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1007 11:26:29.080047   42947 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1007 11:26:29.080053   42947 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1007 11:26:29.080060   42947 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1007 11:26:29.080064   42947 command_runner.go:130] > drop_infra_ctr = false
	I1007 11:26:29.080072   42947 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1007 11:26:29.080079   42947 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1007 11:26:29.080086   42947 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1007 11:26:29.080093   42947 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1007 11:26:29.080099   42947 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1007 11:26:29.080107   42947 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1007 11:26:29.080112   42947 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1007 11:26:29.080119   42947 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1007 11:26:29.080123   42947 command_runner.go:130] > # shared_cpuset = ""
	I1007 11:26:29.080131   42947 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1007 11:26:29.080136   42947 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1007 11:26:29.080140   42947 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1007 11:26:29.080147   42947 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1007 11:26:29.080153   42947 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1007 11:26:29.080158   42947 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1007 11:26:29.080169   42947 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1007 11:26:29.080175   42947 command_runner.go:130] > # enable_criu_support = false
	I1007 11:26:29.080181   42947 command_runner.go:130] > # Enable/disable the generation of the container,
	I1007 11:26:29.080189   42947 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1007 11:26:29.080196   42947 command_runner.go:130] > # enable_pod_events = false
	I1007 11:26:29.080201   42947 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1007 11:26:29.080216   42947 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1007 11:26:29.080220   42947 command_runner.go:130] > # default_runtime = "runc"
	I1007 11:26:29.080227   42947 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1007 11:26:29.080234   42947 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I1007 11:26:29.080244   42947 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1007 11:26:29.080251   42947 command_runner.go:130] > # creation as a file is not desired either.
	I1007 11:26:29.080259   42947 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1007 11:26:29.080266   42947 command_runner.go:130] > # the hostname is being managed dynamically.
	I1007 11:26:29.080271   42947 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1007 11:26:29.080277   42947 command_runner.go:130] > # ]
	I1007 11:26:29.080282   42947 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1007 11:26:29.080290   42947 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1007 11:26:29.080298   42947 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1007 11:26:29.080303   42947 command_runner.go:130] > # Each entry in the table should follow the format:
	I1007 11:26:29.080309   42947 command_runner.go:130] > #
	I1007 11:26:29.080314   42947 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1007 11:26:29.080320   42947 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1007 11:26:29.080342   42947 command_runner.go:130] > # runtime_type = "oci"
	I1007 11:26:29.080348   42947 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1007 11:26:29.080353   42947 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1007 11:26:29.080360   42947 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1007 11:26:29.080364   42947 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1007 11:26:29.080369   42947 command_runner.go:130] > # monitor_env = []
	I1007 11:26:29.080374   42947 command_runner.go:130] > # privileged_without_host_devices = false
	I1007 11:26:29.080381   42947 command_runner.go:130] > # allowed_annotations = []
	I1007 11:26:29.080386   42947 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1007 11:26:29.080391   42947 command_runner.go:130] > # Where:
	I1007 11:26:29.080397   42947 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1007 11:26:29.080405   42947 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1007 11:26:29.080411   42947 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1007 11:26:29.080419   42947 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1007 11:26:29.080424   42947 command_runner.go:130] > #   in $PATH.
	I1007 11:26:29.080433   42947 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1007 11:26:29.080440   42947 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1007 11:26:29.080446   42947 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1007 11:26:29.080451   42947 command_runner.go:130] > #   state.
	I1007 11:26:29.080457   42947 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1007 11:26:29.080464   42947 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1007 11:26:29.080470   42947 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1007 11:26:29.080478   42947 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1007 11:26:29.080483   42947 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1007 11:26:29.080492   42947 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1007 11:26:29.080499   42947 command_runner.go:130] > #   The currently recognized values are:
	I1007 11:26:29.080508   42947 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1007 11:26:29.080517   42947 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1007 11:26:29.080525   42947 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1007 11:26:29.080533   42947 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1007 11:26:29.080542   42947 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1007 11:26:29.080551   42947 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1007 11:26:29.080559   42947 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1007 11:26:29.080567   42947 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1007 11:26:29.080574   42947 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1007 11:26:29.080581   42947 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1007 11:26:29.080585   42947 command_runner.go:130] > #   deprecated option "conmon".
	I1007 11:26:29.080594   42947 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1007 11:26:29.080600   42947 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1007 11:26:29.080608   42947 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1007 11:26:29.080615   42947 command_runner.go:130] > #   should be moved to the container's cgroup
	I1007 11:26:29.080621   42947 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1007 11:26:29.080627   42947 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1007 11:26:29.080633   42947 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1007 11:26:29.080640   42947 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1007 11:26:29.080643   42947 command_runner.go:130] > #
	I1007 11:26:29.080648   42947 command_runner.go:130] > # Using the seccomp notifier feature:
	I1007 11:26:29.080655   42947 command_runner.go:130] > #
	I1007 11:26:29.080661   42947 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1007 11:26:29.080669   42947 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1007 11:26:29.080675   42947 command_runner.go:130] > #
	I1007 11:26:29.080681   42947 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1007 11:26:29.080688   42947 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1007 11:26:29.080691   42947 command_runner.go:130] > #
	I1007 11:26:29.080699   42947 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1007 11:26:29.080704   42947 command_runner.go:130] > # feature.
	I1007 11:26:29.080707   42947 command_runner.go:130] > #
	I1007 11:26:29.080716   42947 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1007 11:26:29.080724   42947 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1007 11:26:29.080730   42947 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1007 11:26:29.080738   42947 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1007 11:26:29.080746   42947 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1007 11:26:29.080749   42947 command_runner.go:130] > #
	I1007 11:26:29.080755   42947 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1007 11:26:29.080763   42947 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1007 11:26:29.080768   42947 command_runner.go:130] > #
	I1007 11:26:29.080773   42947 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1007 11:26:29.080781   42947 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1007 11:26:29.080784   42947 command_runner.go:130] > #
	I1007 11:26:29.080789   42947 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1007 11:26:29.080797   42947 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1007 11:26:29.080800   42947 command_runner.go:130] > # limitation.
	I1007 11:26:29.080809   42947 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1007 11:26:29.080813   42947 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1007 11:26:29.080817   42947 command_runner.go:130] > runtime_type = "oci"
	I1007 11:26:29.080821   42947 command_runner.go:130] > runtime_root = "/run/runc"
	I1007 11:26:29.080825   42947 command_runner.go:130] > runtime_config_path = ""
	I1007 11:26:29.080829   42947 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1007 11:26:29.080835   42947 command_runner.go:130] > monitor_cgroup = "pod"
	I1007 11:26:29.080839   42947 command_runner.go:130] > monitor_exec_cgroup = ""
	I1007 11:26:29.080844   42947 command_runner.go:130] > monitor_env = [
	I1007 11:26:29.080849   42947 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1007 11:26:29.080855   42947 command_runner.go:130] > ]
	I1007 11:26:29.080859   42947 command_runner.go:130] > privileged_without_host_devices = false
	I1007 11:26:29.080865   42947 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1007 11:26:29.080878   42947 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1007 11:26:29.080885   42947 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1007 11:26:29.080892   42947 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1007 11:26:29.080904   42947 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1007 11:26:29.080912   42947 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1007 11:26:29.080921   42947 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1007 11:26:29.080930   42947 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1007 11:26:29.080938   42947 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1007 11:26:29.080944   42947 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1007 11:26:29.080950   42947 command_runner.go:130] > # Example:
	I1007 11:26:29.080955   42947 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1007 11:26:29.080962   42947 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1007 11:26:29.080967   42947 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1007 11:26:29.080973   42947 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1007 11:26:29.080977   42947 command_runner.go:130] > # cpuset = 0
	I1007 11:26:29.080981   42947 command_runner.go:130] > # cpushares = "0-1"
	I1007 11:26:29.080985   42947 command_runner.go:130] > # Where:
	I1007 11:26:29.080989   42947 command_runner.go:130] > # The workload name is workload-type.
	I1007 11:26:29.080998   42947 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1007 11:26:29.081004   42947 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1007 11:26:29.081011   42947 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1007 11:26:29.081019   42947 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1007 11:26:29.081026   42947 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1007 11:26:29.081033   42947 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1007 11:26:29.081039   42947 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1007 11:26:29.081046   42947 command_runner.go:130] > # Default value is set to true
	I1007 11:26:29.081050   42947 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1007 11:26:29.081058   42947 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1007 11:26:29.081062   42947 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1007 11:26:29.081069   42947 command_runner.go:130] > # Default value is set to 'false'
	I1007 11:26:29.081073   42947 command_runner.go:130] > # disable_hostport_mapping = false
	I1007 11:26:29.081080   42947 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1007 11:26:29.081085   42947 command_runner.go:130] > #
	I1007 11:26:29.081090   42947 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1007 11:26:29.081096   42947 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1007 11:26:29.081102   42947 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1007 11:26:29.081107   42947 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1007 11:26:29.081114   42947 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1007 11:26:29.081119   42947 command_runner.go:130] > [crio.image]
	I1007 11:26:29.081124   42947 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1007 11:26:29.081128   42947 command_runner.go:130] > # default_transport = "docker://"
	I1007 11:26:29.081134   42947 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1007 11:26:29.081140   42947 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1007 11:26:29.081143   42947 command_runner.go:130] > # global_auth_file = ""
	I1007 11:26:29.081148   42947 command_runner.go:130] > # The image used to instantiate infra containers.
	I1007 11:26:29.081152   42947 command_runner.go:130] > # This option supports live configuration reload.
	I1007 11:26:29.081157   42947 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1007 11:26:29.081163   42947 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1007 11:26:29.081168   42947 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1007 11:26:29.081173   42947 command_runner.go:130] > # This option supports live configuration reload.
	I1007 11:26:29.081176   42947 command_runner.go:130] > # pause_image_auth_file = ""
	I1007 11:26:29.081183   42947 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1007 11:26:29.081188   42947 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1007 11:26:29.081193   42947 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1007 11:26:29.081199   42947 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1007 11:26:29.081202   42947 command_runner.go:130] > # pause_command = "/pause"
	I1007 11:26:29.081208   42947 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1007 11:26:29.081213   42947 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1007 11:26:29.081219   42947 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1007 11:26:29.081226   42947 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1007 11:26:29.081232   42947 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1007 11:26:29.081237   42947 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1007 11:26:29.081241   42947 command_runner.go:130] > # pinned_images = [
	I1007 11:26:29.081244   42947 command_runner.go:130] > # ]
	I1007 11:26:29.081249   42947 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1007 11:26:29.081255   42947 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1007 11:26:29.081261   42947 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1007 11:26:29.081266   42947 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1007 11:26:29.081271   42947 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1007 11:26:29.081277   42947 command_runner.go:130] > # signature_policy = ""
	I1007 11:26:29.081285   42947 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1007 11:26:29.081293   42947 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1007 11:26:29.081301   42947 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1007 11:26:29.081310   42947 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1007 11:26:29.081318   42947 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1007 11:26:29.081324   42947 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1007 11:26:29.081330   42947 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1007 11:26:29.081338   42947 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1007 11:26:29.081344   42947 command_runner.go:130] > # changing them here.
	I1007 11:26:29.081349   42947 command_runner.go:130] > # insecure_registries = [
	I1007 11:26:29.081358   42947 command_runner.go:130] > # ]
	I1007 11:26:29.081366   42947 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1007 11:26:29.081374   42947 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1007 11:26:29.081378   42947 command_runner.go:130] > # image_volumes = "mkdir"
	I1007 11:26:29.081385   42947 command_runner.go:130] > # Temporary directory to use for storing big files
	I1007 11:26:29.081389   42947 command_runner.go:130] > # big_files_temporary_dir = ""
	I1007 11:26:29.081398   42947 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1007 11:26:29.081405   42947 command_runner.go:130] > # CNI plugins.
	I1007 11:26:29.081408   42947 command_runner.go:130] > [crio.network]
	I1007 11:26:29.081416   42947 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1007 11:26:29.081423   42947 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1007 11:26:29.081427   42947 command_runner.go:130] > # cni_default_network = ""
	I1007 11:26:29.081435   42947 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1007 11:26:29.081442   42947 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1007 11:26:29.081450   42947 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1007 11:26:29.081455   42947 command_runner.go:130] > # plugin_dirs = [
	I1007 11:26:29.081459   42947 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1007 11:26:29.081464   42947 command_runner.go:130] > # ]
	I1007 11:26:29.081470   42947 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1007 11:26:29.081475   42947 command_runner.go:130] > [crio.metrics]
	I1007 11:26:29.081480   42947 command_runner.go:130] > # Globally enable or disable metrics support.
	I1007 11:26:29.081485   42947 command_runner.go:130] > enable_metrics = true
	I1007 11:26:29.081491   42947 command_runner.go:130] > # Specify enabled metrics collectors.
	I1007 11:26:29.081498   42947 command_runner.go:130] > # Per default all metrics are enabled.
	I1007 11:26:29.081509   42947 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1007 11:26:29.081520   42947 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1007 11:26:29.081528   42947 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1007 11:26:29.081533   42947 command_runner.go:130] > # metrics_collectors = [
	I1007 11:26:29.081536   42947 command_runner.go:130] > # 	"operations",
	I1007 11:26:29.081543   42947 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1007 11:26:29.081547   42947 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1007 11:26:29.081553   42947 command_runner.go:130] > # 	"operations_errors",
	I1007 11:26:29.081557   42947 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1007 11:26:29.081566   42947 command_runner.go:130] > # 	"image_pulls_by_name",
	I1007 11:26:29.081573   42947 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1007 11:26:29.081581   42947 command_runner.go:130] > # 	"image_pulls_failures",
	I1007 11:26:29.081587   42947 command_runner.go:130] > # 	"image_pulls_successes",
	I1007 11:26:29.081592   42947 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1007 11:26:29.081597   42947 command_runner.go:130] > # 	"image_layer_reuse",
	I1007 11:26:29.081602   42947 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1007 11:26:29.081608   42947 command_runner.go:130] > # 	"containers_oom_total",
	I1007 11:26:29.081612   42947 command_runner.go:130] > # 	"containers_oom",
	I1007 11:26:29.081618   42947 command_runner.go:130] > # 	"processes_defunct",
	I1007 11:26:29.081622   42947 command_runner.go:130] > # 	"operations_total",
	I1007 11:26:29.081628   42947 command_runner.go:130] > # 	"operations_latency_seconds",
	I1007 11:26:29.081632   42947 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1007 11:26:29.081638   42947 command_runner.go:130] > # 	"operations_errors_total",
	I1007 11:26:29.081642   42947 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1007 11:26:29.081649   42947 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1007 11:26:29.081653   42947 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1007 11:26:29.081658   42947 command_runner.go:130] > # 	"image_pulls_success_total",
	I1007 11:26:29.081665   42947 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1007 11:26:29.081672   42947 command_runner.go:130] > # 	"containers_oom_count_total",
	I1007 11:26:29.081679   42947 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1007 11:26:29.081683   42947 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1007 11:26:29.081688   42947 command_runner.go:130] > # ]
	I1007 11:26:29.081693   42947 command_runner.go:130] > # The port on which the metrics server will listen.
	I1007 11:26:29.081699   42947 command_runner.go:130] > # metrics_port = 9090
	I1007 11:26:29.081704   42947 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1007 11:26:29.081710   42947 command_runner.go:130] > # metrics_socket = ""
	I1007 11:26:29.081715   42947 command_runner.go:130] > # The certificate for the secure metrics server.
	I1007 11:26:29.081725   42947 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1007 11:26:29.081734   42947 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1007 11:26:29.081740   42947 command_runner.go:130] > # certificate on any modification event.
	I1007 11:26:29.081744   42947 command_runner.go:130] > # metrics_cert = ""
	I1007 11:26:29.081751   42947 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1007 11:26:29.081756   42947 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1007 11:26:29.081762   42947 command_runner.go:130] > # metrics_key = ""
	I1007 11:26:29.081768   42947 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1007 11:26:29.081773   42947 command_runner.go:130] > [crio.tracing]
	I1007 11:26:29.081778   42947 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1007 11:26:29.081784   42947 command_runner.go:130] > # enable_tracing = false
	I1007 11:26:29.081789   42947 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1007 11:26:29.081794   42947 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1007 11:26:29.081803   42947 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1007 11:26:29.081808   42947 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1007 11:26:29.081812   42947 command_runner.go:130] > # CRI-O NRI configuration.
	I1007 11:26:29.081820   42947 command_runner.go:130] > [crio.nri]
	I1007 11:26:29.081824   42947 command_runner.go:130] > # Globally enable or disable NRI.
	I1007 11:26:29.081828   42947 command_runner.go:130] > # enable_nri = false
	I1007 11:26:29.081835   42947 command_runner.go:130] > # NRI socket to listen on.
	I1007 11:26:29.081842   42947 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1007 11:26:29.081846   42947 command_runner.go:130] > # NRI plugin directory to use.
	I1007 11:26:29.081853   42947 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1007 11:26:29.081860   42947 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1007 11:26:29.081866   42947 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1007 11:26:29.081871   42947 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1007 11:26:29.081875   42947 command_runner.go:130] > # nri_disable_connections = false
	I1007 11:26:29.081883   42947 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1007 11:26:29.081887   42947 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1007 11:26:29.081897   42947 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1007 11:26:29.081901   42947 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1007 11:26:29.081907   42947 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1007 11:26:29.081911   42947 command_runner.go:130] > [crio.stats]
	I1007 11:26:29.081917   42947 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1007 11:26:29.081924   42947 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1007 11:26:29.081928   42947 command_runner.go:130] > # stats_collection_period = 0
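The dump above is the full CRI-O configuration (TOML, comments included) as echoed during node provisioning. As a minimal sketch, assuming SSH access to the profile from this run and a stock minikube/CRI-O layout, roughly the same view can be reproduced directly on the node; the drop-in path is an assumption, not taken from the log:

	# Print the configuration CRI-O is running with (defaults merged with overrides)
	minikube ssh -p multinode-873106 "sudo crio config"
	# Inspect the minikube-managed override file, if present (path is an assumption)
	minikube ssh -p multinode-873106 "sudo cat /etc/crio/crio.conf.d/02-crio.conf"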
	I1007 11:26:29.082047   42947 cni.go:84] Creating CNI manager for ""
	I1007 11:26:29.082061   42947 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1007 11:26:29.082070   42947 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 11:26:29.082088   42947 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.51 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-873106 NodeName:multinode-873106 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 11:26:29.082218   42947 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-873106"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 11:26:29.082278   42947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 11:26:29.092898   42947 command_runner.go:130] > kubeadm
	I1007 11:26:29.092921   42947 command_runner.go:130] > kubectl
	I1007 11:26:29.092925   42947 command_runner.go:130] > kubelet
	I1007 11:26:29.092942   42947 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 11:26:29.092990   42947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 11:26:29.103017   42947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1007 11:26:29.121792   42947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 11:26:29.139315   42947 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
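The kubeadm/kubelet/kube-proxy YAML rendered above is what lands on the node as /var/tmp/minikube/kubeadm.yaml.new in the scp call just logged. A minimal sketch of sanity-checking such a rendered config with standard kubeadm subcommands (the path comes from the log; running these by hand is an assumption, not something the test does):

	# Exercise the config without mutating the cluster
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
	# List the images the config implies, useful when image pulls are the suspected failure
	sudo kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml.new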
	I1007 11:26:29.158566   42947 ssh_runner.go:195] Run: grep 192.168.39.51	control-plane.minikube.internal$ /etc/hosts
	I1007 11:26:29.162577   42947 command_runner.go:130] > 192.168.39.51	control-plane.minikube.internal
	I1007 11:26:29.162655   42947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:26:29.318385   42947 ssh_runner.go:195] Run: sudo systemctl start kubelet
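The first two scp calls above install kubelet.service and its 10-kubeadm.conf drop-in, and the daemon-reload/start pair activates them. A quick way to confirm the unit picked up the drop-in (generic systemctl usage on the node, not part of the test flow):

	systemctl cat kubelet              # shows kubelet.service together with 10-kubeadm.conf
	systemctl status kubelet --no-pager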
	I1007 11:26:29.347329   42947 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106 for IP: 192.168.39.51
	I1007 11:26:29.347348   42947 certs.go:194] generating shared ca certs ...
	I1007 11:26:29.347367   42947 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:26:29.347533   42947 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 11:26:29.347573   42947 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 11:26:29.347580   42947 certs.go:256] generating profile certs ...
	I1007 11:26:29.347649   42947 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106/client.key
	I1007 11:26:29.347915   42947 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106/apiserver.key.8b7bf9e8
	I1007 11:26:29.347965   42947 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106/proxy-client.key
	I1007 11:26:29.347978   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 11:26:29.348025   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 11:26:29.348041   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 11:26:29.348054   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 11:26:29.348066   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 11:26:29.348079   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 11:26:29.348091   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 11:26:29.348102   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 11:26:29.348157   42947 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 11:26:29.348185   42947 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 11:26:29.348194   42947 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 11:26:29.348215   42947 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 11:26:29.348251   42947 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 11:26:29.348277   42947 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 11:26:29.348312   42947 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 11:26:29.348338   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem -> /usr/share/ca-certificates/11096.pem
	I1007 11:26:29.348351   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> /usr/share/ca-certificates/110962.pem
	I1007 11:26:29.348363   42947 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:26:29.349002   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 11:26:29.395362   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 11:26:29.442350   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 11:26:29.477194   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 11:26:29.504304   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1007 11:26:29.546630   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 11:26:29.584892   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 11:26:29.635489   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/multinode-873106/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 11:26:29.668380   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 11:26:29.696465   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 11:26:29.721690   42947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 11:26:29.746589   42947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 11:26:29.764874   42947 ssh_runner.go:195] Run: openssl version
	I1007 11:26:29.770848   42947 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1007 11:26:29.770926   42947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 11:26:29.782036   42947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:26:29.786872   42947 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:26:29.787114   42947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:26:29.787158   42947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:26:29.793234   42947 command_runner.go:130] > b5213941
	I1007 11:26:29.793296   42947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 11:26:29.803420   42947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 11:26:29.814560   42947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 11:26:29.819098   42947 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 11:26:29.819126   42947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 11:26:29.819160   42947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 11:26:29.825502   42947 command_runner.go:130] > 51391683
	I1007 11:26:29.825580   42947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
	I1007 11:26:29.835035   42947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 11:26:29.846081   42947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 11:26:29.850480   42947 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 11:26:29.850582   42947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 11:26:29.850633   42947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 11:26:29.856351   42947 command_runner.go:130] > 3ec20f2e
	I1007 11:26:29.856518   42947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
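The repeated hash-and-symlink commands above follow the standard OpenSSL CA directory convention: each certificate under /usr/share/ca-certificates gets an /etc/ssl/certs/<subject-hash>.0 link so verification can locate it. A minimal sketch of the same two steps for one of the certs from this run (names taken from the log):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"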
	I1007 11:26:29.866851   42947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 11:26:29.871475   42947 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 11:26:29.871502   42947 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1007 11:26:29.871510   42947 command_runner.go:130] > Device: 253,1	Inode: 7337000     Links: 1
	I1007 11:26:29.871520   42947 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1007 11:26:29.871529   42947 command_runner.go:130] > Access: 2024-10-07 11:19:47.284088729 +0000
	I1007 11:26:29.871536   42947 command_runner.go:130] > Modify: 2024-10-07 11:19:47.284088729 +0000
	I1007 11:26:29.871543   42947 command_runner.go:130] > Change: 2024-10-07 11:19:47.284088729 +0000
	I1007 11:26:29.871550   42947 command_runner.go:130] >  Birth: 2024-10-07 11:19:47.284088729 +0000
	I1007 11:26:29.871623   42947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 11:26:29.877324   42947 command_runner.go:130] > Certificate will not expire
	I1007 11:26:29.877509   42947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 11:26:29.883393   42947 command_runner.go:130] > Certificate will not expire
	I1007 11:26:29.883507   42947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 11:26:29.889508   42947 command_runner.go:130] > Certificate will not expire
	I1007 11:26:29.889601   42947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 11:26:29.895167   42947 command_runner.go:130] > Certificate will not expire
	I1007 11:26:29.895251   42947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 11:26:29.900737   42947 command_runner.go:130] > Certificate will not expire
	I1007 11:26:29.900931   42947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1007 11:26:29.906338   42947 command_runner.go:130] > Certificate will not expire
	I1007 11:26:29.906496   42947 kubeadm.go:392] StartCluster: {Name:multinode-873106 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:multinode-873106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.126 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:
false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:26:29.906607   42947 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 11:26:29.906668   42947 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 11:26:29.943058   42947 command_runner.go:130] > fb1d8421da42a7887167bb3dfcee87d8b1927bd0cfdb27826f456b260650b7ae
	I1007 11:26:29.943098   42947 command_runner.go:130] > f3590c54e9965d4d724d2527908e35728496a93977b13476d79ab1d7e9448a3a
	I1007 11:26:29.943110   42947 command_runner.go:130] > c956258dc13d1fb534b779a8be5ed514ed82e53da7fbf9d9938c83f09db0db71
	I1007 11:26:29.943159   42947 command_runner.go:130] > 7a06720aace13ce689f397283c21f1d09ee33ff0b4580d1666878b9d29a7008b
	I1007 11:26:29.943171   42947 command_runner.go:130] > da82a1dafba5cfc5ce03d13bc0773af7458c0d722741bc0319262ae385cd7d2d
	I1007 11:26:29.943187   42947 command_runner.go:130] > edd0197acb1729fe1537ea8707c43578e9acf466574a81ee3e30c4417b15505d
	I1007 11:26:29.943198   42947 command_runner.go:130] > d8f06daea653405132f3538370db34a96f36c20dfe7594b52f1018c70fa55a84
	I1007 11:26:29.943321   42947 command_runner.go:130] > e93e7c28ad05a6b5f7458edc2f807f69cc00f61c2c9b2185e1dc46239ec54525
	I1007 11:26:29.943358   42947 command_runner.go:130] > 03a0eaccb2b60a7e10ef407ba77040a84c4210c709052030c365d7064fe3995f
	I1007 11:26:29.944722   42947 cri.go:89] found id: "fb1d8421da42a7887167bb3dfcee87d8b1927bd0cfdb27826f456b260650b7ae"
	I1007 11:26:29.944737   42947 cri.go:89] found id: "f3590c54e9965d4d724d2527908e35728496a93977b13476d79ab1d7e9448a3a"
	I1007 11:26:29.944740   42947 cri.go:89] found id: "c956258dc13d1fb534b779a8be5ed514ed82e53da7fbf9d9938c83f09db0db71"
	I1007 11:26:29.944744   42947 cri.go:89] found id: "7a06720aace13ce689f397283c21f1d09ee33ff0b4580d1666878b9d29a7008b"
	I1007 11:26:29.944747   42947 cri.go:89] found id: "da82a1dafba5cfc5ce03d13bc0773af7458c0d722741bc0319262ae385cd7d2d"
	I1007 11:26:29.944750   42947 cri.go:89] found id: "edd0197acb1729fe1537ea8707c43578e9acf466574a81ee3e30c4417b15505d"
	I1007 11:26:29.944752   42947 cri.go:89] found id: "d8f06daea653405132f3538370db34a96f36c20dfe7594b52f1018c70fa55a84"
	I1007 11:26:29.944754   42947 cri.go:89] found id: "e93e7c28ad05a6b5f7458edc2f807f69cc00f61c2c9b2185e1dc46239ec54525"
	I1007 11:26:29.944757   42947 cri.go:89] found id: "03a0eaccb2b60a7e10ef407ba77040a84c4210c709052030c365d7064fe3995f"
	I1007 11:26:29.944763   42947 cri.go:89] found id: ""
	I1007 11:26:29.944807   42947 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-873106 -n multinode-873106
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-873106 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (145.33s)
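For reference, the two kinds of checks logged above (certificate expiry via openssl and kube-system container discovery via crictl) can be repeated by hand against the same node. This is only a sketch, assuming the multinode-873106 profile still exists and is running; the certificate path, label selector, and profile name are taken from the log above, not re-verified here:

    # exit 0 and "Certificate will not expire" mean the cert is valid for at least another 86400s (24h)
    out/minikube-linux-amd64 -p multinode-873106 ssh -- "sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400"

    # list all kube-system container IDs the same way the harness does
    out/minikube-linux-amd64 -p multinode-873106 ssh -- "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"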

                                                
                                    
x
+
TestPreload (269.90s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-275411 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1007 11:35:08.250497   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-275411 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m6.409591442s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-275411 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-275411 image pull gcr.io/k8s-minikube/busybox: (3.294682266s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-275411
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-275411: exit status 82 (2m0.475306443s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-275411"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-275411 failed: exit status 82
panic.go:629: *** TestPreload FAILED at 2024-10-07 11:38:47.402255341 +0000 UTC m=+4629.585041360
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-275411 -n test-preload-275411
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-275411 -n test-preload-275411: exit status 3 (18.565492517s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1007 11:39:05.964323   47804 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.43:22: connect: no route to host
	E1007 11:39:05.964346   47804 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.43:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-275411" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-275411" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-275411
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-275411: (1.151717232s)
--- FAIL: TestPreload (269.90s)
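The failure above is the GUEST_STOP_TIMEOUT path: minikube stop gave up with exit status 82 after roughly two minutes while the VM was still reported as "Running", and the follow-up status probe then failed with "no route to host". If this reproduces, a rough triage sketch based on the messages above; the virsh calls are an assumption (they require libvirt client tools on the host and rely on the kvm2 driver naming the libvirt domain after the profile, as the domain XML later in this report shows), and the profile in this particular run was already removed by the cleanup step:

    # collect minikube's own logs, as the error box asks
    out/minikube-linux-amd64 -p test-preload-275411 logs --file=logs.txt

    # ask libvirt directly what state the domain is in
    sudo virsh list --all
    sudo virsh domstate test-preload-275411

    # clean up once diagnostics are collected (same as the harness cleanup above)
    out/minikube-linux-amd64 delete -p test-preload-275411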

                                                
                                    
x
+
TestKubernetesUpgrade (423.55s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-852078 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-852078 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m31.580088745s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-852078] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-852078" primary control-plane node in "kubernetes-upgrade-852078" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 11:40:58.935905   48885 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:40:58.936047   48885 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:40:58.936058   48885 out.go:358] Setting ErrFile to fd 2...
	I1007 11:40:58.936064   48885 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:40:58.936324   48885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 11:40:58.936808   48885 out.go:352] Setting JSON to false
	I1007 11:40:58.937551   48885 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4953,"bootTime":1728296306,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 11:40:58.937607   48885 start.go:139] virtualization: kvm guest
	I1007 11:40:58.939334   48885 out.go:177] * [kubernetes-upgrade-852078] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 11:40:58.940859   48885 notify.go:220] Checking for updates...
	I1007 11:40:58.940875   48885 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 11:40:58.943033   48885 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:40:58.945613   48885 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 11:40:58.949166   48885 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 11:40:58.951335   48885 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 11:40:58.953646   48885 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 11:40:58.955037   48885 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 11:40:58.989091   48885 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 11:40:58.990343   48885 start.go:297] selected driver: kvm2
	I1007 11:40:58.990357   48885 start.go:901] validating driver "kvm2" against <nil>
	I1007 11:40:58.990372   48885 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 11:40:58.991092   48885 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:40:58.991178   48885 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19761-3912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 11:40:59.007662   48885 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 11:40:59.007731   48885 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 11:40:59.008085   48885 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 11:40:59.008117   48885 cni.go:84] Creating CNI manager for ""
	I1007 11:40:59.008175   48885 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 11:40:59.008182   48885 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 11:40:59.008236   48885 start.go:340] cluster config:
	{Name:kubernetes-upgrade-852078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-852078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:40:59.008352   48885 iso.go:125] acquiring lock: {Name:mk894f62b1e70fe8556c552f818beb1e4be6fad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:40:59.011118   48885 out.go:177] * Starting "kubernetes-upgrade-852078" primary control-plane node in "kubernetes-upgrade-852078" cluster
	I1007 11:40:59.012382   48885 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1007 11:40:59.012425   48885 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1007 11:40:59.012448   48885 cache.go:56] Caching tarball of preloaded images
	I1007 11:40:59.012576   48885 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 11:40:59.012589   48885 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1007 11:40:59.012877   48885 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/config.json ...
	I1007 11:40:59.012901   48885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/config.json: {Name:mk73771e699002f313319f557992821b15bf3270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:40:59.013042   48885 start.go:360] acquireMachinesLock for kubernetes-upgrade-852078: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 11:40:59.013086   48885 start.go:364] duration metric: took 22.871µs to acquireMachinesLock for "kubernetes-upgrade-852078"
	I1007 11:40:59.013109   48885 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-852078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-852078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 11:40:59.013174   48885 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 11:40:59.015656   48885 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 11:40:59.015814   48885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:40:59.015856   48885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:40:59.031723   48885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40901
	I1007 11:40:59.032190   48885 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:40:59.032755   48885 main.go:141] libmachine: Using API Version  1
	I1007 11:40:59.032779   48885 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:40:59.033170   48885 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:40:59.033362   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetMachineName
	I1007 11:40:59.033517   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .DriverName
	I1007 11:40:59.033729   48885 start.go:159] libmachine.API.Create for "kubernetes-upgrade-852078" (driver="kvm2")
	I1007 11:40:59.033760   48885 client.go:168] LocalClient.Create starting
	I1007 11:40:59.033793   48885 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem
	I1007 11:40:59.033833   48885 main.go:141] libmachine: Decoding PEM data...
	I1007 11:40:59.033853   48885 main.go:141] libmachine: Parsing certificate...
	I1007 11:40:59.033923   48885 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem
	I1007 11:40:59.033956   48885 main.go:141] libmachine: Decoding PEM data...
	I1007 11:40:59.033974   48885 main.go:141] libmachine: Parsing certificate...
	I1007 11:40:59.033999   48885 main.go:141] libmachine: Running pre-create checks...
	I1007 11:40:59.034012   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .PreCreateCheck
	I1007 11:40:59.034406   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetConfigRaw
	I1007 11:40:59.034809   48885 main.go:141] libmachine: Creating machine...
	I1007 11:40:59.034825   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .Create
	I1007 11:40:59.034976   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Creating KVM machine...
	I1007 11:40:59.036274   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | found existing default KVM network
	I1007 11:40:59.037118   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | I1007 11:40:59.036961   48942 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a30}
	I1007 11:40:59.037154   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | created network xml: 
	I1007 11:40:59.037168   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | <network>
	I1007 11:40:59.037179   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG |   <name>mk-kubernetes-upgrade-852078</name>
	I1007 11:40:59.037189   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG |   <dns enable='no'/>
	I1007 11:40:59.037199   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG |   
	I1007 11:40:59.037210   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1007 11:40:59.037220   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG |     <dhcp>
	I1007 11:40:59.037236   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1007 11:40:59.037251   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG |     </dhcp>
	I1007 11:40:59.037265   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG |   </ip>
	I1007 11:40:59.037285   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG |   
	I1007 11:40:59.037301   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | </network>
	I1007 11:40:59.037311   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | 
	I1007 11:40:59.042233   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | trying to create private KVM network mk-kubernetes-upgrade-852078 192.168.39.0/24...
	I1007 11:40:59.112801   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | private KVM network mk-kubernetes-upgrade-852078 192.168.39.0/24 created
	I1007 11:40:59.112828   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | I1007 11:40:59.112771   48942 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 11:40:59.112849   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Setting up store path in /home/jenkins/minikube-integration/19761-3912/.minikube/machines/kubernetes-upgrade-852078 ...
	I1007 11:40:59.112860   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Building disk image from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 11:40:59.112977   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Downloading /home/jenkins/minikube-integration/19761-3912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 11:40:59.356494   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | I1007 11:40:59.356352   48942 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/kubernetes-upgrade-852078/id_rsa...
	I1007 11:40:59.551293   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | I1007 11:40:59.551149   48942 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/kubernetes-upgrade-852078/kubernetes-upgrade-852078.rawdisk...
	I1007 11:40:59.551329   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | Writing magic tar header
	I1007 11:40:59.551347   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | Writing SSH key tar header
	I1007 11:40:59.551355   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | I1007 11:40:59.551280   48942 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/kubernetes-upgrade-852078 ...
	I1007 11:40:59.551404   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/kubernetes-upgrade-852078
	I1007 11:40:59.551441   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/kubernetes-upgrade-852078 (perms=drwx------)
	I1007 11:40:59.551476   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines (perms=drwxr-xr-x)
	I1007 11:40:59.551487   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines
	I1007 11:40:59.551498   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 11:40:59.551506   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912
	I1007 11:40:59.551517   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 11:40:59.551547   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | Checking permissions on dir: /home/jenkins
	I1007 11:40:59.551564   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube (perms=drwxr-xr-x)
	I1007 11:40:59.551578   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912 (perms=drwxrwxr-x)
	I1007 11:40:59.551588   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 11:40:59.551603   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 11:40:59.551622   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Creating domain...
	I1007 11:40:59.551634   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | Checking permissions on dir: /home
	I1007 11:40:59.551645   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | Skipping /home - not owner
	I1007 11:40:59.552885   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) define libvirt domain using xml: 
	I1007 11:40:59.552905   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) <domain type='kvm'>
	I1007 11:40:59.552914   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)   <name>kubernetes-upgrade-852078</name>
	I1007 11:40:59.552922   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)   <memory unit='MiB'>2200</memory>
	I1007 11:40:59.552932   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)   <vcpu>2</vcpu>
	I1007 11:40:59.552940   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)   <features>
	I1007 11:40:59.552947   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)     <acpi/>
	I1007 11:40:59.552952   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)     <apic/>
	I1007 11:40:59.552960   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)     <pae/>
	I1007 11:40:59.552969   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)     
	I1007 11:40:59.552976   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)   </features>
	I1007 11:40:59.552983   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)   <cpu mode='host-passthrough'>
	I1007 11:40:59.552992   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)   
	I1007 11:40:59.553004   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)   </cpu>
	I1007 11:40:59.553015   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)   <os>
	I1007 11:40:59.553025   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)     <type>hvm</type>
	I1007 11:40:59.553032   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)     <boot dev='cdrom'/>
	I1007 11:40:59.553036   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)     <boot dev='hd'/>
	I1007 11:40:59.553044   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)     <bootmenu enable='no'/>
	I1007 11:40:59.553048   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)   </os>
	I1007 11:40:59.553055   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)   <devices>
	I1007 11:40:59.553062   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)     <disk type='file' device='cdrom'>
	I1007 11:40:59.553079   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/kubernetes-upgrade-852078/boot2docker.iso'/>
	I1007 11:40:59.553093   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)       <target dev='hdc' bus='scsi'/>
	I1007 11:40:59.553105   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)       <readonly/>
	I1007 11:40:59.553114   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)     </disk>
	I1007 11:40:59.553125   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)     <disk type='file' device='disk'>
	I1007 11:40:59.553133   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 11:40:59.553146   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/kubernetes-upgrade-852078/kubernetes-upgrade-852078.rawdisk'/>
	I1007 11:40:59.553159   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)       <target dev='hda' bus='virtio'/>
	I1007 11:40:59.553180   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)     </disk>
	I1007 11:40:59.553193   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)     <interface type='network'>
	I1007 11:40:59.553200   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)       <source network='mk-kubernetes-upgrade-852078'/>
	I1007 11:40:59.553207   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)       <model type='virtio'/>
	I1007 11:40:59.553212   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)     </interface>
	I1007 11:40:59.553219   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)     <interface type='network'>
	I1007 11:40:59.553224   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)       <source network='default'/>
	I1007 11:40:59.553231   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)       <model type='virtio'/>
	I1007 11:40:59.553236   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)     </interface>
	I1007 11:40:59.553243   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)     <serial type='pty'>
	I1007 11:40:59.553248   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)       <target port='0'/>
	I1007 11:40:59.553255   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)     </serial>
	I1007 11:40:59.553261   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)     <console type='pty'>
	I1007 11:40:59.553271   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)       <target type='serial' port='0'/>
	I1007 11:40:59.553278   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)     </console>
	I1007 11:40:59.553289   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)     <rng model='virtio'>
	I1007 11:40:59.553295   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)       <backend model='random'>/dev/random</backend>
	I1007 11:40:59.553304   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)     </rng>
	I1007 11:40:59.553309   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)     
	I1007 11:40:59.553315   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)     
	I1007 11:40:59.553321   48885 main.go:141] libmachine: (kubernetes-upgrade-852078)   </devices>
	I1007 11:40:59.553327   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) </domain>
	I1007 11:40:59.553334   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) 
	I1007 11:40:59.557833   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:2c:29:99 in network default
	I1007 11:40:59.558473   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Ensuring networks are active...
	I1007 11:40:59.558502   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:40:59.559060   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Ensuring network default is active
	I1007 11:40:59.559319   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Ensuring network mk-kubernetes-upgrade-852078 is active
	I1007 11:40:59.559726   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Getting domain xml...
	I1007 11:40:59.560335   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Creating domain...
	I1007 11:41:00.838400   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Waiting to get IP...
	I1007 11:41:00.839161   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:00.839490   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | unable to find current IP address of domain kubernetes-upgrade-852078 in network mk-kubernetes-upgrade-852078
	I1007 11:41:00.839521   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | I1007 11:41:00.839477   48942 retry.go:31] will retry after 240.715999ms: waiting for machine to come up
	I1007 11:41:01.082146   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:01.082571   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | unable to find current IP address of domain kubernetes-upgrade-852078 in network mk-kubernetes-upgrade-852078
	I1007 11:41:01.082595   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | I1007 11:41:01.082517   48942 retry.go:31] will retry after 365.670023ms: waiting for machine to come up
	I1007 11:41:01.450108   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:01.450593   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | unable to find current IP address of domain kubernetes-upgrade-852078 in network mk-kubernetes-upgrade-852078
	I1007 11:41:01.450617   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | I1007 11:41:01.450549   48942 retry.go:31] will retry after 463.13315ms: waiting for machine to come up
	I1007 11:41:01.915256   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:01.915670   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | unable to find current IP address of domain kubernetes-upgrade-852078 in network mk-kubernetes-upgrade-852078
	I1007 11:41:01.915698   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | I1007 11:41:01.915613   48942 retry.go:31] will retry after 433.135514ms: waiting for machine to come up
	I1007 11:41:02.350269   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:02.350807   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | unable to find current IP address of domain kubernetes-upgrade-852078 in network mk-kubernetes-upgrade-852078
	I1007 11:41:02.350862   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | I1007 11:41:02.350780   48942 retry.go:31] will retry after 745.703933ms: waiting for machine to come up
	I1007 11:41:03.097655   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:03.098066   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | unable to find current IP address of domain kubernetes-upgrade-852078 in network mk-kubernetes-upgrade-852078
	I1007 11:41:03.098088   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | I1007 11:41:03.098017   48942 retry.go:31] will retry after 908.490753ms: waiting for machine to come up
	I1007 11:41:04.007782   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:04.008277   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | unable to find current IP address of domain kubernetes-upgrade-852078 in network mk-kubernetes-upgrade-852078
	I1007 11:41:04.008324   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | I1007 11:41:04.008210   48942 retry.go:31] will retry after 828.330796ms: waiting for machine to come up
	I1007 11:41:04.838197   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:04.838698   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | unable to find current IP address of domain kubernetes-upgrade-852078 in network mk-kubernetes-upgrade-852078
	I1007 11:41:04.838726   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | I1007 11:41:04.838650   48942 retry.go:31] will retry after 1.333165154s: waiting for machine to come up
	I1007 11:41:06.173124   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:06.173578   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | unable to find current IP address of domain kubernetes-upgrade-852078 in network mk-kubernetes-upgrade-852078
	I1007 11:41:06.173612   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | I1007 11:41:06.173533   48942 retry.go:31] will retry after 1.431451514s: waiting for machine to come up
	I1007 11:41:07.606923   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:07.607498   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | unable to find current IP address of domain kubernetes-upgrade-852078 in network mk-kubernetes-upgrade-852078
	I1007 11:41:07.607523   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | I1007 11:41:07.607460   48942 retry.go:31] will retry after 1.733899551s: waiting for machine to come up
	I1007 11:41:09.343379   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:09.343767   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | unable to find current IP address of domain kubernetes-upgrade-852078 in network mk-kubernetes-upgrade-852078
	I1007 11:41:09.343796   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | I1007 11:41:09.343751   48942 retry.go:31] will retry after 1.832646481s: waiting for machine to come up
	I1007 11:41:11.178603   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:11.179052   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | unable to find current IP address of domain kubernetes-upgrade-852078 in network mk-kubernetes-upgrade-852078
	I1007 11:41:11.179082   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | I1007 11:41:11.179017   48942 retry.go:31] will retry after 2.214739077s: waiting for machine to come up
	I1007 11:41:13.396214   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:13.396772   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | unable to find current IP address of domain kubernetes-upgrade-852078 in network mk-kubernetes-upgrade-852078
	I1007 11:41:13.396825   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | I1007 11:41:13.396750   48942 retry.go:31] will retry after 4.091156221s: waiting for machine to come up
	I1007 11:41:17.488982   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:17.489411   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | unable to find current IP address of domain kubernetes-upgrade-852078 in network mk-kubernetes-upgrade-852078
	I1007 11:41:17.489434   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | I1007 11:41:17.489370   48942 retry.go:31] will retry after 4.38580053s: waiting for machine to come up
	I1007 11:41:21.877721   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:21.878265   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has current primary IP address 192.168.39.196 and MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:21.878294   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Found IP for machine: 192.168.39.196
	I1007 11:41:21.878308   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Reserving static IP address...
	I1007 11:41:21.878877   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-852078", mac: "52:54:00:29:c0:4b", ip: "192.168.39.196"} in network mk-kubernetes-upgrade-852078
	I1007 11:41:21.957874   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | Getting to WaitForSSH function...
	I1007 11:41:21.957899   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Reserved static IP address: 192.168.39.196
	I1007 11:41:21.957912   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Waiting for SSH to be available...
	I1007 11:41:21.960685   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:21.961089   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:c0:4b", ip: ""} in network mk-kubernetes-upgrade-852078: {Iface:virbr1 ExpiryTime:2024-10-07 12:41:14 +0000 UTC Type:0 Mac:52:54:00:29:c0:4b Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:minikube Clientid:01:52:54:00:29:c0:4b}
	I1007 11:41:21.961125   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined IP address 192.168.39.196 and MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:21.961238   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | Using SSH client type: external
	I1007 11:41:21.961260   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | Using SSH private key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/kubernetes-upgrade-852078/id_rsa (-rw-------)
	I1007 11:41:21.961298   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19761-3912/.minikube/machines/kubernetes-upgrade-852078/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 11:41:21.961310   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | About to run SSH command:
	I1007 11:41:21.961319   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | exit 0
	I1007 11:41:22.088836   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | SSH cmd err, output: <nil>: 
	I1007 11:41:22.089145   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) KVM machine creation complete!
	I1007 11:41:22.089520   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetConfigRaw
	I1007 11:41:22.090306   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .DriverName
	I1007 11:41:22.090558   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .DriverName
	I1007 11:41:22.090752   48885 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 11:41:22.090770   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetState
	I1007 11:41:22.092236   48885 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 11:41:22.092283   48885 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 11:41:22.092291   48885 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 11:41:22.092312   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHHostname
	I1007 11:41:22.094744   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:22.095083   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:c0:4b", ip: ""} in network mk-kubernetes-upgrade-852078: {Iface:virbr1 ExpiryTime:2024-10-07 12:41:14 +0000 UTC Type:0 Mac:52:54:00:29:c0:4b Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-852078 Clientid:01:52:54:00:29:c0:4b}
	I1007 11:41:22.095121   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined IP address 192.168.39.196 and MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:22.095198   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHPort
	I1007 11:41:22.095382   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHKeyPath
	I1007 11:41:22.095523   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHKeyPath
	I1007 11:41:22.095682   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHUsername
	I1007 11:41:22.095820   48885 main.go:141] libmachine: Using SSH client type: native
	I1007 11:41:22.096077   48885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I1007 11:41:22.096093   48885 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 11:41:22.195339   48885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 11:41:22.195360   48885 main.go:141] libmachine: Detecting the provisioner...
	I1007 11:41:22.195368   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHHostname
	I1007 11:41:22.198142   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:22.198466   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:c0:4b", ip: ""} in network mk-kubernetes-upgrade-852078: {Iface:virbr1 ExpiryTime:2024-10-07 12:41:14 +0000 UTC Type:0 Mac:52:54:00:29:c0:4b Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-852078 Clientid:01:52:54:00:29:c0:4b}
	I1007 11:41:22.198499   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined IP address 192.168.39.196 and MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:22.198613   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHPort
	I1007 11:41:22.198820   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHKeyPath
	I1007 11:41:22.198993   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHKeyPath
	I1007 11:41:22.199109   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHUsername
	I1007 11:41:22.199257   48885 main.go:141] libmachine: Using SSH client type: native
	I1007 11:41:22.199465   48885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I1007 11:41:22.199476   48885 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 11:41:22.300946   48885 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 11:41:22.301045   48885 main.go:141] libmachine: found compatible host: buildroot
	I1007 11:41:22.301056   48885 main.go:141] libmachine: Provisioning with buildroot...
	I1007 11:41:22.301064   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetMachineName
	I1007 11:41:22.301324   48885 buildroot.go:166] provisioning hostname "kubernetes-upgrade-852078"
	I1007 11:41:22.301359   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetMachineName
	I1007 11:41:22.301553   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHHostname
	I1007 11:41:22.303968   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:22.304311   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:c0:4b", ip: ""} in network mk-kubernetes-upgrade-852078: {Iface:virbr1 ExpiryTime:2024-10-07 12:41:14 +0000 UTC Type:0 Mac:52:54:00:29:c0:4b Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-852078 Clientid:01:52:54:00:29:c0:4b}
	I1007 11:41:22.304338   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined IP address 192.168.39.196 and MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:22.304488   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHPort
	I1007 11:41:22.304684   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHKeyPath
	I1007 11:41:22.304846   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHKeyPath
	I1007 11:41:22.304991   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHUsername
	I1007 11:41:22.305164   48885 main.go:141] libmachine: Using SSH client type: native
	I1007 11:41:22.305397   48885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I1007 11:41:22.305417   48885 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-852078 && echo "kubernetes-upgrade-852078" | sudo tee /etc/hostname
	I1007 11:41:22.424594   48885 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-852078
	
	I1007 11:41:22.424624   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHHostname
	I1007 11:41:22.427433   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:22.427811   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:c0:4b", ip: ""} in network mk-kubernetes-upgrade-852078: {Iface:virbr1 ExpiryTime:2024-10-07 12:41:14 +0000 UTC Type:0 Mac:52:54:00:29:c0:4b Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-852078 Clientid:01:52:54:00:29:c0:4b}
	I1007 11:41:22.427857   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined IP address 192.168.39.196 and MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:22.428052   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHPort
	I1007 11:41:22.428243   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHKeyPath
	I1007 11:41:22.428512   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHKeyPath
	I1007 11:41:22.428666   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHUsername
	I1007 11:41:22.428833   48885 main.go:141] libmachine: Using SSH client type: native
	I1007 11:41:22.428992   48885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I1007 11:41:22.429006   48885 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-852078' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-852078/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-852078' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 11:41:22.542557   48885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 11:41:22.542585   48885 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 11:41:22.542624   48885 buildroot.go:174] setting up certificates
	I1007 11:41:22.542641   48885 provision.go:84] configureAuth start
	I1007 11:41:22.542654   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetMachineName
	I1007 11:41:22.542905   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetIP
	I1007 11:41:22.545606   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:22.545957   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:c0:4b", ip: ""} in network mk-kubernetes-upgrade-852078: {Iface:virbr1 ExpiryTime:2024-10-07 12:41:14 +0000 UTC Type:0 Mac:52:54:00:29:c0:4b Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-852078 Clientid:01:52:54:00:29:c0:4b}
	I1007 11:41:22.545980   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined IP address 192.168.39.196 and MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:22.546150   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHHostname
	I1007 11:41:22.548277   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:22.548619   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:c0:4b", ip: ""} in network mk-kubernetes-upgrade-852078: {Iface:virbr1 ExpiryTime:2024-10-07 12:41:14 +0000 UTC Type:0 Mac:52:54:00:29:c0:4b Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-852078 Clientid:01:52:54:00:29:c0:4b}
	I1007 11:41:22.548655   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined IP address 192.168.39.196 and MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:22.548765   48885 provision.go:143] copyHostCerts
	I1007 11:41:22.548824   48885 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 11:41:22.548847   48885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 11:41:22.548921   48885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 11:41:22.549038   48885 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 11:41:22.549050   48885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 11:41:22.549089   48885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 11:41:22.549161   48885 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 11:41:22.549171   48885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 11:41:22.549203   48885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 11:41:22.549274   48885 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-852078 san=[127.0.0.1 192.168.39.196 kubernetes-upgrade-852078 localhost minikube]
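(Annotation: the SANs listed above are baked into the generated server certificate on the Jenkins host. A minimal sketch for inspecting them by hand, using the path from this run, would be:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19761-3912/.minikube/certs/../machines/server.pem \
      | grep -A1 'Subject Alternative Name'

This is only a verification aid; it is not part of the test flow.)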
	I1007 11:41:22.756704   48885 provision.go:177] copyRemoteCerts
	I1007 11:41:22.756788   48885 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 11:41:22.756832   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHHostname
	I1007 11:41:22.759730   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:22.760124   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:c0:4b", ip: ""} in network mk-kubernetes-upgrade-852078: {Iface:virbr1 ExpiryTime:2024-10-07 12:41:14 +0000 UTC Type:0 Mac:52:54:00:29:c0:4b Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-852078 Clientid:01:52:54:00:29:c0:4b}
	I1007 11:41:22.760147   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined IP address 192.168.39.196 and MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:22.760357   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHPort
	I1007 11:41:22.760546   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHKeyPath
	I1007 11:41:22.760706   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHUsername
	I1007 11:41:22.760844   48885 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/kubernetes-upgrade-852078/id_rsa Username:docker}
	I1007 11:41:22.842665   48885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 11:41:22.869328   48885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1007 11:41:22.897876   48885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 11:41:22.923009   48885 provision.go:87] duration metric: took 380.352627ms to configureAuth
	I1007 11:41:22.923042   48885 buildroot.go:189] setting minikube options for container-runtime
	I1007 11:41:22.923209   48885 config.go:182] Loaded profile config "kubernetes-upgrade-852078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1007 11:41:22.923290   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHHostname
	I1007 11:41:22.926428   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:22.926790   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:c0:4b", ip: ""} in network mk-kubernetes-upgrade-852078: {Iface:virbr1 ExpiryTime:2024-10-07 12:41:14 +0000 UTC Type:0 Mac:52:54:00:29:c0:4b Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-852078 Clientid:01:52:54:00:29:c0:4b}
	I1007 11:41:22.926823   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined IP address 192.168.39.196 and MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:22.926948   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHPort
	I1007 11:41:22.927166   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHKeyPath
	I1007 11:41:22.927336   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHKeyPath
	I1007 11:41:22.927525   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHUsername
	I1007 11:41:22.927772   48885 main.go:141] libmachine: Using SSH client type: native
	I1007 11:41:22.927958   48885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I1007 11:41:22.927975   48885 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 11:41:23.151651   48885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 11:41:23.151687   48885 main.go:141] libmachine: Checking connection to Docker...
	I1007 11:41:23.151698   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetURL
	I1007 11:41:23.152876   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | Using libvirt version 6000000
	I1007 11:41:23.154894   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:23.155203   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:c0:4b", ip: ""} in network mk-kubernetes-upgrade-852078: {Iface:virbr1 ExpiryTime:2024-10-07 12:41:14 +0000 UTC Type:0 Mac:52:54:00:29:c0:4b Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-852078 Clientid:01:52:54:00:29:c0:4b}
	I1007 11:41:23.155224   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined IP address 192.168.39.196 and MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:23.155396   48885 main.go:141] libmachine: Docker is up and running!
	I1007 11:41:23.155410   48885 main.go:141] libmachine: Reticulating splines...
	I1007 11:41:23.155416   48885 client.go:171] duration metric: took 24.121649207s to LocalClient.Create
	I1007 11:41:23.155436   48885 start.go:167] duration metric: took 24.121716141s to libmachine.API.Create "kubernetes-upgrade-852078"
	I1007 11:41:23.155446   48885 start.go:293] postStartSetup for "kubernetes-upgrade-852078" (driver="kvm2")
	I1007 11:41:23.155456   48885 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 11:41:23.155482   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .DriverName
	I1007 11:41:23.155687   48885 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 11:41:23.155716   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHHostname
	I1007 11:41:23.157781   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:23.158109   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:c0:4b", ip: ""} in network mk-kubernetes-upgrade-852078: {Iface:virbr1 ExpiryTime:2024-10-07 12:41:14 +0000 UTC Type:0 Mac:52:54:00:29:c0:4b Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-852078 Clientid:01:52:54:00:29:c0:4b}
	I1007 11:41:23.158140   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined IP address 192.168.39.196 and MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:23.158217   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHPort
	I1007 11:41:23.158364   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHKeyPath
	I1007 11:41:23.158550   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHUsername
	I1007 11:41:23.158729   48885 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/kubernetes-upgrade-852078/id_rsa Username:docker}
	I1007 11:41:23.240737   48885 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 11:41:23.245466   48885 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 11:41:23.245493   48885 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 11:41:23.245556   48885 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 11:41:23.245634   48885 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 11:41:23.245737   48885 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 11:41:23.257484   48885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 11:41:23.283532   48885 start.go:296] duration metric: took 128.070996ms for postStartSetup
	I1007 11:41:23.283592   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetConfigRaw
	I1007 11:41:23.284225   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetIP
	I1007 11:41:23.286942   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:23.287321   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:c0:4b", ip: ""} in network mk-kubernetes-upgrade-852078: {Iface:virbr1 ExpiryTime:2024-10-07 12:41:14 +0000 UTC Type:0 Mac:52:54:00:29:c0:4b Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-852078 Clientid:01:52:54:00:29:c0:4b}
	I1007 11:41:23.287350   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined IP address 192.168.39.196 and MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:23.287571   48885 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/config.json ...
	I1007 11:41:23.287783   48885 start.go:128] duration metric: took 24.274599097s to createHost
	I1007 11:41:23.287812   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHHostname
	I1007 11:41:23.290324   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:23.290667   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:c0:4b", ip: ""} in network mk-kubernetes-upgrade-852078: {Iface:virbr1 ExpiryTime:2024-10-07 12:41:14 +0000 UTC Type:0 Mac:52:54:00:29:c0:4b Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-852078 Clientid:01:52:54:00:29:c0:4b}
	I1007 11:41:23.290706   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined IP address 192.168.39.196 and MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:23.290926   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHPort
	I1007 11:41:23.291073   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHKeyPath
	I1007 11:41:23.291247   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHKeyPath
	I1007 11:41:23.291375   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHUsername
	I1007 11:41:23.291508   48885 main.go:141] libmachine: Using SSH client type: native
	I1007 11:41:23.291686   48885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I1007 11:41:23.291703   48885 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 11:41:23.393653   48885 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728301283.349926131
	
	I1007 11:41:23.393678   48885 fix.go:216] guest clock: 1728301283.349926131
	I1007 11:41:23.393688   48885 fix.go:229] Guest: 2024-10-07 11:41:23.349926131 +0000 UTC Remote: 2024-10-07 11:41:23.287796599 +0000 UTC m=+24.400786996 (delta=62.129532ms)
	I1007 11:41:23.393724   48885 fix.go:200] guest clock delta is within tolerance: 62.129532ms
	I1007 11:41:23.393730   48885 start.go:83] releasing machines lock for "kubernetes-upgrade-852078", held for 24.380633546s
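(Annotation: the delta reported above is simply guest clock minus host clock, 1728301283.349926131 s minus 1728301283.287796599 s = 0.062129532 s, i.e. the 62.129532 ms shown, which this run treats as within tolerance, so no clock fix is attempted.)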
	I1007 11:41:23.393754   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .DriverName
	I1007 11:41:23.393996   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetIP
	I1007 11:41:23.396927   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:23.397239   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:c0:4b", ip: ""} in network mk-kubernetes-upgrade-852078: {Iface:virbr1 ExpiryTime:2024-10-07 12:41:14 +0000 UTC Type:0 Mac:52:54:00:29:c0:4b Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-852078 Clientid:01:52:54:00:29:c0:4b}
	I1007 11:41:23.397268   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined IP address 192.168.39.196 and MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:23.397486   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .DriverName
	I1007 11:41:23.398027   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .DriverName
	I1007 11:41:23.398305   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .DriverName
	I1007 11:41:23.398429   48885 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 11:41:23.398470   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHHostname
	I1007 11:41:23.398530   48885 ssh_runner.go:195] Run: cat /version.json
	I1007 11:41:23.398552   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHHostname
	I1007 11:41:23.401350   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:23.401511   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:23.401739   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:c0:4b", ip: ""} in network mk-kubernetes-upgrade-852078: {Iface:virbr1 ExpiryTime:2024-10-07 12:41:14 +0000 UTC Type:0 Mac:52:54:00:29:c0:4b Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-852078 Clientid:01:52:54:00:29:c0:4b}
	I1007 11:41:23.401764   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined IP address 192.168.39.196 and MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:23.401913   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHPort
	I1007 11:41:23.401999   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:c0:4b", ip: ""} in network mk-kubernetes-upgrade-852078: {Iface:virbr1 ExpiryTime:2024-10-07 12:41:14 +0000 UTC Type:0 Mac:52:54:00:29:c0:4b Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-852078 Clientid:01:52:54:00:29:c0:4b}
	I1007 11:41:23.402022   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined IP address 192.168.39.196 and MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:23.402081   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHKeyPath
	I1007 11:41:23.402185   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHPort
	I1007 11:41:23.402258   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHUsername
	I1007 11:41:23.402350   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHKeyPath
	I1007 11:41:23.402408   48885 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/kubernetes-upgrade-852078/id_rsa Username:docker}
	I1007 11:41:23.402462   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHUsername
	I1007 11:41:23.402559   48885 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/kubernetes-upgrade-852078/id_rsa Username:docker}
	I1007 11:41:23.486869   48885 ssh_runner.go:195] Run: systemctl --version
	I1007 11:41:23.512110   48885 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 11:41:23.679893   48885 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 11:41:23.686441   48885 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 11:41:23.686527   48885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 11:41:23.704833   48885 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 11:41:23.704859   48885 start.go:495] detecting cgroup driver to use...
	I1007 11:41:23.704943   48885 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 11:41:23.722427   48885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 11:41:23.736827   48885 docker.go:217] disabling cri-docker service (if available) ...
	I1007 11:41:23.736903   48885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 11:41:23.751294   48885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 11:41:23.765954   48885 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 11:41:23.899730   48885 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 11:41:24.048033   48885 docker.go:233] disabling docker service ...
	I1007 11:41:24.048122   48885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 11:41:24.065452   48885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 11:41:24.079621   48885 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 11:41:24.253972   48885 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 11:41:24.395024   48885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 11:41:24.409746   48885 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 11:41:24.429096   48885 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1007 11:41:24.429162   48885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:41:24.441966   48885 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 11:41:24.442025   48885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:41:24.457054   48885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:41:24.468568   48885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
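(Annotation: the sed edits above pin the pause image, the cgroup manager and the conmon cgroup in the CRI-O drop-in. A quick sketch for confirming the result on the guest, assuming an otherwise default 02-crio.conf:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the commands above:
    #   pause_image = "registry.k8s.io/pause:3.2"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
)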
	I1007 11:41:24.481943   48885 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 11:41:24.493761   48885 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 11:41:24.503741   48885 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 11:41:24.503796   48885 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 11:41:24.519275   48885 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
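(Annotation: the sysctl probe fails above only because br_netfilter is not yet loaded; the subsequent modprobe and the ip_forward echo bring the node into the expected state. A sketch for checking this by hand on the guest:

    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # readable once the module is loaded
    cat /proc/sys/net/ipv4/ip_forward           # 1 after the echo above
)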
	I1007 11:41:24.529767   48885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:41:24.685825   48885 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 11:41:24.793261   48885 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 11:41:24.793348   48885 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 11:41:24.798633   48885 start.go:563] Will wait 60s for crictl version
	I1007 11:41:24.798694   48885 ssh_runner.go:195] Run: which crictl
	I1007 11:41:24.802873   48885 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 11:41:24.847839   48885 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 11:41:24.847915   48885 ssh_runner.go:195] Run: crio --version
	I1007 11:41:24.885088   48885 ssh_runner.go:195] Run: crio --version
	I1007 11:41:24.924689   48885 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1007 11:41:24.926103   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetIP
	I1007 11:41:24.929008   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:24.929357   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:c0:4b", ip: ""} in network mk-kubernetes-upgrade-852078: {Iface:virbr1 ExpiryTime:2024-10-07 12:41:14 +0000 UTC Type:0 Mac:52:54:00:29:c0:4b Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-852078 Clientid:01:52:54:00:29:c0:4b}
	I1007 11:41:24.929400   48885 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined IP address 192.168.39.196 and MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:41:24.929552   48885 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 11:41:24.934360   48885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 11:41:24.947449   48885 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-852078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-852078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1007 11:41:24.947629   48885 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1007 11:41:24.947716   48885 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:41:24.979259   48885 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1007 11:41:24.979352   48885 ssh_runner.go:195] Run: which lz4
	I1007 11:41:24.983901   48885 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 11:41:24.988386   48885 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 11:41:24.988418   48885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1007 11:41:26.742929   48885 crio.go:462] duration metric: took 1.759064542s to copy over tarball
	I1007 11:41:26.743043   48885 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 11:41:29.333270   48885 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.59019654s)
	I1007 11:41:29.333298   48885 crio.go:469] duration metric: took 2.590339552s to extract the tarball
	I1007 11:41:29.333307   48885 ssh_runner.go:146] rm: /preloaded.tar.lz4
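(Annotation: no preload tarball existed on the guest, so it was copied over and unpacked into /var before being removed. A sketch for sanity-checking what landed, where /var/lib/containers/storage is assumed to be the default containers/storage root:

    sudo ls /var/lib/containers/storage
    sudo crictl images   # re-run, as the log does next, to see what the preload provided
)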
	I1007 11:41:29.376954   48885 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:41:29.430048   48885 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1007 11:41:29.430079   48885 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1007 11:41:29.430160   48885 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 11:41:29.430191   48885 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1007 11:41:29.430208   48885 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1007 11:41:29.430252   48885 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 11:41:29.430353   48885 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1007 11:41:29.430456   48885 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1007 11:41:29.430468   48885 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1007 11:41:29.430488   48885 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1007 11:41:29.431835   48885 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1007 11:41:29.431901   48885 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 11:41:29.431906   48885 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1007 11:41:29.431915   48885 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1007 11:41:29.431833   48885 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1007 11:41:29.431936   48885 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 11:41:29.431974   48885 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1007 11:41:29.431977   48885 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1007 11:41:29.601729   48885 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 11:41:29.609783   48885 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1007 11:41:29.649993   48885 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1007 11:41:29.650038   48885 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 11:41:29.650084   48885 ssh_runner.go:195] Run: which crictl
	I1007 11:41:29.662839   48885 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1007 11:41:29.662886   48885 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1007 11:41:29.662893   48885 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 11:41:29.662914   48885 ssh_runner.go:195] Run: which crictl
	I1007 11:41:29.667152   48885 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1007 11:41:29.674828   48885 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1007 11:41:29.719126   48885 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1007 11:41:29.719275   48885 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 11:41:29.756406   48885 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1007 11:41:29.756462   48885 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1007 11:41:29.756527   48885 ssh_runner.go:195] Run: which crictl
	I1007 11:41:29.788875   48885 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 11:41:29.788890   48885 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1007 11:41:29.788875   48885 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1007 11:41:29.789343   48885 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1007 11:41:29.814786   48885 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1007 11:41:29.843552   48885 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1007 11:41:29.890852   48885 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1007 11:41:29.906604   48885 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1007 11:41:29.906663   48885 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1007 11:41:29.906687   48885 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1007 11:41:29.919955   48885 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1007 11:41:29.920008   48885 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1007 11:41:29.920057   48885 ssh_runner.go:195] Run: which crictl
	I1007 11:41:29.953643   48885 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1007 11:41:29.953693   48885 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1007 11:41:29.953749   48885 ssh_runner.go:195] Run: which crictl
	I1007 11:41:29.956837   48885 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1007 11:41:29.956882   48885 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1007 11:41:29.956929   48885 ssh_runner.go:195] Run: which crictl
	I1007 11:41:29.985720   48885 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1007 11:41:29.985775   48885 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1007 11:41:29.985824   48885 ssh_runner.go:195] Run: which crictl
	I1007 11:41:29.991212   48885 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1007 11:41:29.991256   48885 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1007 11:41:29.991211   48885 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1007 11:41:29.991294   48885 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1007 11:41:29.993469   48885 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1007 11:41:30.101595   48885 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1007 11:41:30.101622   48885 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1007 11:41:30.101626   48885 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1007 11:41:30.101660   48885 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1007 11:41:30.101712   48885 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1007 11:41:30.194773   48885 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1007 11:41:30.194854   48885 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1007 11:41:30.194921   48885 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1007 11:41:30.195011   48885 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1007 11:41:30.265964   48885 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1007 11:41:30.280935   48885 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1007 11:41:30.280994   48885 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1007 11:41:30.291733   48885 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1007 11:41:30.634639   48885 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 11:41:30.783276   48885 cache_images.go:92] duration metric: took 1.353178197s to LoadCachedImages
	W1007 11:41:30.783367   48885 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19761-3912/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I1007 11:41:30.783389   48885 kubeadm.go:934] updating node { 192.168.39.196 8443 v1.20.0 crio true true} ...
	I1007 11:41:30.783506   48885 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-852078 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-852078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
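(Annotation: the kubelet unit override and kubeadm settings printed above are written to the guest a few steps later in this log as the 433-byte 10-kubeadm.conf and kubeadm.yaml.new. A minimal sketch for verifying them on the node:

    systemctl cat kubelet                                       # base unit plus the drop-in
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # the ExecStart line above
    cat /var/tmp/minikube/kubeadm.yaml.new                      # the kubeadm config above
)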
	I1007 11:41:30.783607   48885 ssh_runner.go:195] Run: crio config
	I1007 11:41:30.837209   48885 cni.go:84] Creating CNI manager for ""
	I1007 11:41:30.837234   48885 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 11:41:30.837245   48885 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 11:41:30.837266   48885 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.196 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-852078 NodeName:kubernetes-upgrade-852078 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1007 11:41:30.837406   48885 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-852078"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 11:41:30.837482   48885 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1007 11:41:30.850839   48885 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 11:41:30.850920   48885 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 11:41:30.861145   48885 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I1007 11:41:30.881212   48885 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 11:41:30.898480   48885 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I1007 11:41:30.915762   48885 ssh_runner.go:195] Run: grep 192.168.39.196	control-plane.minikube.internal$ /etc/hosts
	I1007 11:41:30.919671   48885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.196	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
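(Annotation: after the two /etc/hosts rewrites in this run, host.minikube.internal earlier and control-plane.minikube.internal here, the guest resolves both names locally. A quick sketch to confirm:

    grep minikube.internal /etc/hosts
    # expected, per the commands above:
    #   192.168.39.1    host.minikube.internal
    #   192.168.39.196  control-plane.minikube.internal
)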
	I1007 11:41:30.932678   48885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:41:31.056176   48885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 11:41:31.074319   48885 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078 for IP: 192.168.39.196
	I1007 11:41:31.074338   48885 certs.go:194] generating shared ca certs ...
	I1007 11:41:31.074352   48885 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:41:31.074519   48885 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 11:41:31.074574   48885 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 11:41:31.074587   48885 certs.go:256] generating profile certs ...
	I1007 11:41:31.074651   48885 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/client.key
	I1007 11:41:31.074671   48885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/client.crt with IP's: []
	I1007 11:41:31.575065   48885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/client.crt ...
	I1007 11:41:31.575095   48885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/client.crt: {Name:mk649c2ac1f30cd7060d9390f774e5e8e394bed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:41:31.575280   48885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/client.key ...
	I1007 11:41:31.575301   48885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/client.key: {Name:mke123c9e53b64819dd0146e39530371ba05f878 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:41:31.575414   48885 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/apiserver.key.8a2bf1db
	I1007 11:41:31.575443   48885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/apiserver.crt.8a2bf1db with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.196]
	I1007 11:41:31.840799   48885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/apiserver.crt.8a2bf1db ...
	I1007 11:41:31.840828   48885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/apiserver.crt.8a2bf1db: {Name:mk17ec4206f461ed7e2258d6a7a909e2b49c5ebe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:41:31.840991   48885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/apiserver.key.8a2bf1db ...
	I1007 11:41:31.841011   48885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/apiserver.key.8a2bf1db: {Name:mkae3b51cfd50fb4fe0ffd23afcb4fdba232f958 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:41:31.841102   48885 certs.go:381] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/apiserver.crt.8a2bf1db -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/apiserver.crt
	I1007 11:41:31.841233   48885 certs.go:385] copying /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/apiserver.key.8a2bf1db -> /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/apiserver.key
	I1007 11:41:31.841326   48885 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/proxy-client.key
	I1007 11:41:31.841348   48885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/proxy-client.crt with IP's: []
	I1007 11:41:31.936593   48885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/proxy-client.crt ...
	I1007 11:41:31.936629   48885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/proxy-client.crt: {Name:mke4430429838a83edb2e6163e64b220e073ecff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:41:31.936823   48885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/proxy-client.key ...
	I1007 11:41:31.936841   48885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/proxy-client.key: {Name:mk858b24ffbcbb8e77467656423c4c998128dcde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
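(Context for the cert-generation lines above: the apiserver profile cert is a serving certificate whose IP SANs include the cluster service IP 10.96.0.1, localhost, 10.0.0.1 and the node IP 192.168.39.196. minikube generates these in Go; a hand-rolled openssl equivalent, shown only as an illustrative sketch with placeholder ca.crt/ca.key and subject, would be:)

    # Create a key + CSR, then have the (placeholder) minikube CA sign it with the same IP SANs as in the log.
    openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key -subj "/CN=minikube" -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 \
      -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.196\n") \
      -out apiserver.crt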
	I1007 11:41:31.937037   48885 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 11:41:31.937089   48885 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 11:41:31.937103   48885 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 11:41:31.937140   48885 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 11:41:31.937176   48885 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 11:41:31.937209   48885 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 11:41:31.937273   48885 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 11:41:31.937892   48885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 11:41:31.967000   48885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 11:41:31.993649   48885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 11:41:32.025586   48885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 11:41:32.054804   48885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1007 11:41:32.079317   48885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 11:41:32.103215   48885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 11:41:32.128216   48885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kubernetes-upgrade-852078/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 11:41:32.153217   48885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 11:41:32.176935   48885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 11:41:32.201211   48885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 11:41:32.226358   48885 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 11:41:32.244130   48885 ssh_runner.go:195] Run: openssl version
	I1007 11:41:32.250147   48885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 11:41:32.261475   48885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:41:32.266087   48885 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:41:32.266143   48885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:41:32.272227   48885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 11:41:32.284140   48885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 11:41:32.295561   48885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 11:41:32.300226   48885 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 11:41:32.300294   48885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 11:41:32.306443   48885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
	I1007 11:41:32.318490   48885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 11:41:32.339377   48885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 11:41:32.346344   48885 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 11:41:32.346416   48885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 11:41:32.353006   48885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
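(The test/ln steps above install each CA under /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject-hash name — b5213941.0, 51391683.0 and 3ec20f2e.0 in this run. A minimal sketch of that convention, using the minikubeCA path from the log:)

    # Compute the subject hash OpenSSL uses to look up trusted CAs,
    # then create the hash-named symlink under /etc/ssl/certs.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"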
	I1007 11:41:32.366915   48885 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 11:41:32.372041   48885 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 11:41:32.372133   48885 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-852078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-852078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:41:32.372240   48885 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 11:41:32.372309   48885 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 11:41:32.417366   48885 cri.go:89] found id: ""
	I1007 11:41:32.417452   48885 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 11:41:32.428226   48885 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 11:41:32.438722   48885 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 11:41:32.449328   48885 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 11:41:32.449357   48885 kubeadm.go:157] found existing configuration files:
	
	I1007 11:41:32.449406   48885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 11:41:32.459803   48885 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 11:41:32.459882   48885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 11:41:32.471029   48885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 11:41:32.481682   48885 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 11:41:32.481751   48885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 11:41:32.492164   48885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 11:41:32.502256   48885 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 11:41:32.502337   48885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 11:41:32.512511   48885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 11:41:32.522935   48885 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 11:41:32.522987   48885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
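(The four grep/rm pairs above are minikube's stale-config check: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init runs. A rough, illustrative equivalent of that loop, using the same paths:)

    # Remove kubeconfigs that are missing or point at a different control plane endpoint.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done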
	I1007 11:41:32.533800   48885 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 11:41:32.836309   48885 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 11:43:31.478886   48885 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1007 11:43:31.479023   48885 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1007 11:43:31.480602   48885 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1007 11:43:31.480673   48885 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 11:43:31.480785   48885 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 11:43:31.480911   48885 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 11:43:31.481077   48885 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1007 11:43:31.481170   48885 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 11:43:31.483207   48885 out.go:235]   - Generating certificates and keys ...
	I1007 11:43:31.483313   48885 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 11:43:31.483406   48885 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 11:43:31.483536   48885 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 11:43:31.483624   48885 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 11:43:31.483679   48885 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 11:43:31.483753   48885 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 11:43:31.483833   48885 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 11:43:31.484015   48885 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-852078 localhost] and IPs [192.168.39.196 127.0.0.1 ::1]
	I1007 11:43:31.484095   48885 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 11:43:31.484284   48885 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-852078 localhost] and IPs [192.168.39.196 127.0.0.1 ::1]
	I1007 11:43:31.484385   48885 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 11:43:31.484494   48885 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 11:43:31.484565   48885 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 11:43:31.484654   48885 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 11:43:31.484725   48885 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 11:43:31.484804   48885 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 11:43:31.484872   48885 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 11:43:31.484919   48885 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 11:43:31.485041   48885 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 11:43:31.485168   48885 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 11:43:31.485227   48885 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 11:43:31.485310   48885 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 11:43:31.486868   48885 out.go:235]   - Booting up control plane ...
	I1007 11:43:31.486954   48885 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 11:43:31.487038   48885 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 11:43:31.487151   48885 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 11:43:31.487241   48885 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 11:43:31.487433   48885 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 11:43:31.487499   48885 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1007 11:43:31.487599   48885 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 11:43:31.487883   48885 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 11:43:31.487997   48885 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 11:43:31.488260   48885 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 11:43:31.488359   48885 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 11:43:31.488648   48885 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 11:43:31.488756   48885 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 11:43:31.489033   48885 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 11:43:31.489090   48885 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 11:43:31.489261   48885 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 11:43:31.489274   48885 kubeadm.go:310] 
	I1007 11:43:31.489325   48885 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1007 11:43:31.489360   48885 kubeadm.go:310] 		timed out waiting for the condition
	I1007 11:43:31.489369   48885 kubeadm.go:310] 
	I1007 11:43:31.489427   48885 kubeadm.go:310] 	This error is likely caused by:
	I1007 11:43:31.489460   48885 kubeadm.go:310] 		- The kubelet is not running
	I1007 11:43:31.489601   48885 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1007 11:43:31.489611   48885 kubeadm.go:310] 
	I1007 11:43:31.489736   48885 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1007 11:43:31.489784   48885 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1007 11:43:31.489818   48885 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1007 11:43:31.489832   48885 kubeadm.go:310] 
	I1007 11:43:31.489940   48885 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1007 11:43:31.490045   48885 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1007 11:43:31.490064   48885 kubeadm.go:310] 
	I1007 11:43:31.490196   48885 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1007 11:43:31.490320   48885 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1007 11:43:31.490413   48885 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1007 11:43:31.490487   48885 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1007 11:43:31.490529   48885 kubeadm.go:310] 
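(The repeated kubelet-check failures above mean kubeadm could not reach the kubelet's local health endpoint on port 10248. The manual checks the hints refer to would look roughly like this on the node — a sketch assuming a systemd host, as in this job:)

    # Probe the kubelet healthz endpoint kubeadm polls.
    curl -sS http://localhost:10248/healthz; echo
    # Inspect the kubelet unit and its recent logs.
    systemctl status kubelet --no-pager
    sudo journalctl -u kubelet -n 100 --no-pager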
	W1007 11:43:31.490599   48885 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-852078 localhost] and IPs [192.168.39.196 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-852078 localhost] and IPs [192.168.39.196 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
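(The stderr above ends with "[WARNING Service-Kubelet]: kubelet service is not enabled". The command that warning asks for, with --now added so the unit also starts immediately, would be:)

    # Enable the kubelet systemd unit (and start it right away).
    sudo systemctl enable --now kubelet.service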
	
	I1007 11:43:31.490647   48885 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 11:43:32.441665   48885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 11:43:32.457271   48885 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 11:43:32.467310   48885 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 11:43:32.467335   48885 kubeadm.go:157] found existing configuration files:
	
	I1007 11:43:32.467388   48885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 11:43:32.476819   48885 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 11:43:32.476878   48885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 11:43:32.486334   48885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 11:43:32.498010   48885 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 11:43:32.498083   48885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 11:43:32.508524   48885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 11:43:32.517973   48885 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 11:43:32.518039   48885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 11:43:32.529094   48885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 11:43:32.538587   48885 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 11:43:32.538644   48885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 11:43:32.549365   48885 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 11:43:32.808596   48885 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 11:45:28.975412   48885 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1007 11:45:28.975523   48885 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1007 11:45:28.977794   48885 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1007 11:45:28.977854   48885 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 11:45:28.977931   48885 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 11:45:28.978036   48885 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 11:45:28.978171   48885 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1007 11:45:28.978284   48885 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 11:45:28.980617   48885 out.go:235]   - Generating certificates and keys ...
	I1007 11:45:28.980718   48885 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 11:45:28.980835   48885 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 11:45:28.980965   48885 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 11:45:28.981060   48885 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 11:45:28.981163   48885 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 11:45:28.981269   48885 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 11:45:28.981362   48885 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 11:45:28.981451   48885 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 11:45:28.981559   48885 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 11:45:28.981670   48885 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 11:45:28.981731   48885 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 11:45:28.981817   48885 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 11:45:28.981894   48885 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 11:45:28.981969   48885 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 11:45:28.982056   48885 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 11:45:28.982167   48885 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 11:45:28.982321   48885 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 11:45:28.982432   48885 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 11:45:28.982500   48885 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 11:45:28.982591   48885 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 11:45:28.983950   48885 out.go:235]   - Booting up control plane ...
	I1007 11:45:28.984053   48885 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 11:45:28.984137   48885 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 11:45:28.984235   48885 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 11:45:28.984348   48885 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 11:45:28.984516   48885 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 11:45:28.984583   48885 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1007 11:45:28.984678   48885 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 11:45:28.984880   48885 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 11:45:28.984976   48885 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 11:45:28.985190   48885 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 11:45:28.985270   48885 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 11:45:28.985451   48885 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 11:45:28.985531   48885 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 11:45:28.985780   48885 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 11:45:28.985863   48885 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 11:45:28.986027   48885 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 11:45:28.986037   48885 kubeadm.go:310] 
	I1007 11:45:28.986094   48885 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1007 11:45:28.986131   48885 kubeadm.go:310] 		timed out waiting for the condition
	I1007 11:45:28.986140   48885 kubeadm.go:310] 
	I1007 11:45:28.986180   48885 kubeadm.go:310] 	This error is likely caused by:
	I1007 11:45:28.986215   48885 kubeadm.go:310] 		- The kubelet is not running
	I1007 11:45:28.986349   48885 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1007 11:45:28.986357   48885 kubeadm.go:310] 
	I1007 11:45:28.986494   48885 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1007 11:45:28.986549   48885 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1007 11:45:28.986602   48885 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1007 11:45:28.986612   48885 kubeadm.go:310] 
	I1007 11:45:28.986731   48885 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1007 11:45:28.986848   48885 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1007 11:45:28.986859   48885 kubeadm.go:310] 
	I1007 11:45:28.986974   48885 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1007 11:45:28.987122   48885 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1007 11:45:28.987244   48885 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1007 11:45:28.987336   48885 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1007 11:45:28.987364   48885 kubeadm.go:310] 
	I1007 11:45:28.987413   48885 kubeadm.go:394] duration metric: took 3m56.615284137s to StartCluster
	I1007 11:45:28.987456   48885 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 11:45:28.987518   48885 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 11:45:29.044271   48885 cri.go:89] found id: ""
	I1007 11:45:29.044314   48885 logs.go:282] 0 containers: []
	W1007 11:45:29.044325   48885 logs.go:284] No container was found matching "kube-apiserver"
	I1007 11:45:29.044337   48885 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 11:45:29.044398   48885 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 11:45:29.087745   48885 cri.go:89] found id: ""
	I1007 11:45:29.087774   48885 logs.go:282] 0 containers: []
	W1007 11:45:29.087793   48885 logs.go:284] No container was found matching "etcd"
	I1007 11:45:29.087800   48885 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 11:45:29.087862   48885 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 11:45:29.131027   48885 cri.go:89] found id: ""
	I1007 11:45:29.131054   48885 logs.go:282] 0 containers: []
	W1007 11:45:29.131065   48885 logs.go:284] No container was found matching "coredns"
	I1007 11:45:29.131073   48885 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 11:45:29.131132   48885 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 11:45:29.176758   48885 cri.go:89] found id: ""
	I1007 11:45:29.176786   48885 logs.go:282] 0 containers: []
	W1007 11:45:29.176797   48885 logs.go:284] No container was found matching "kube-scheduler"
	I1007 11:45:29.176805   48885 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 11:45:29.176867   48885 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 11:45:29.222457   48885 cri.go:89] found id: ""
	I1007 11:45:29.222489   48885 logs.go:282] 0 containers: []
	W1007 11:45:29.222499   48885 logs.go:284] No container was found matching "kube-proxy"
	I1007 11:45:29.222505   48885 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 11:45:29.222570   48885 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 11:45:29.269577   48885 cri.go:89] found id: ""
	I1007 11:45:29.269602   48885 logs.go:282] 0 containers: []
	W1007 11:45:29.269611   48885 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 11:45:29.269619   48885 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 11:45:29.269677   48885 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 11:45:29.314420   48885 cri.go:89] found id: ""
	I1007 11:45:29.314443   48885 logs.go:282] 0 containers: []
	W1007 11:45:29.314450   48885 logs.go:284] No container was found matching "kindnet"
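(Each "found id: """ result above comes from filtering crictl by container name; with no control plane running, every filter returns nothing. For example, the first of those checks is equivalent to running this on the node — same CRI-O defaults as in the VM:)

    # Ask CRI-O for any container named kube-apiserver, in any state; an empty result
    # means the kubelet never created the static pod.
    sudo crictl ps -a --quiet --name=kube-apiserver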
	I1007 11:45:29.314460   48885 logs.go:123] Gathering logs for dmesg ...
	I1007 11:45:29.314471   48885 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 11:45:29.331950   48885 logs.go:123] Gathering logs for describe nodes ...
	I1007 11:45:29.332008   48885 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 11:45:29.467839   48885 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 11:45:29.467864   48885 logs.go:123] Gathering logs for CRI-O ...
	I1007 11:45:29.467884   48885 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 11:45:29.592830   48885 logs.go:123] Gathering logs for container status ...
	I1007 11:45:29.592869   48885 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 11:45:29.635620   48885 logs.go:123] Gathering logs for kubelet ...
	I1007 11:45:29.635662   48885 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
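(The "connection refused" from the describe-nodes attempt above (localhost:8443) is consistent with no apiserver ever starting. A quick confirmation from inside the node, assuming the profile's default apiserver port 8443, would be:)

    # Is anything listening on the apiserver port, and does it answer?
    sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"
    curl -ksS https://localhost:8443/healthz; echo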
	W1007 11:45:29.691557   48885 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1007 11:45:29.691636   48885 out.go:270] * 
	* 
	W1007 11:45:29.691695   48885 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1007 11:45:29.691712   48885 out.go:270] * 
	* 
	W1007 11:45:29.692789   48885 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 11:45:29.910288   48885 out.go:201] 
	W1007 11:45:30.182668   48885 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1007 11:45:30.182736   48885 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1007 11:45:30.182767   48885 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1007 11:45:30.359818   48885 out.go:201] 

** /stderr **
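The stderr above repeats one symptom: on the v1.20.0 start, the kubelet never answers its health check on 127.0.0.1:10248, so kubeadm times out in the wait-control-plane phase and minikube exits with K8S_KUBELET_NOT_RUNNING. As a rough troubleshooting sketch only, chaining the commands that the kubeadm output and minikube's own suggestion already name (the first four run inside the kubernetes-upgrade-852078 VM, e.g. over minikube ssh):

	# Check the kubelet service and its journal (kubeadm's first two suggestions)
	systemctl status kubelet
	journalctl -xeu kubelet
	# List control-plane containers under CRI-O and inspect a failing one
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# Minikube's suggested retry for this failure mode (run on the host)
	minikube start -p kubernetes-upgrade-852078 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd

CONTAINERID is a placeholder for an ID reported by crictl ps; every command here is taken from the log output above rather than verified against this run.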
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-852078 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-852078
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-852078: (6.569208536s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-852078 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-852078 status --format={{.Host}}: exit status 7 (64.352254ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-852078 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-852078 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m12.964752555s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-852078 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-852078 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-852078 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (90.40769ms)

-- stdout --
	* [kubernetes-upgrade-852078] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-852078
	    minikube start -p kubernetes-upgrade-852078 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8520782 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-852078 --kubernetes-version=v1.31.1
	    

** /stderr **
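The downgrade refusal is expected behavior; the stderr above lists the recovery paths. As a sketch only (reusing the driver and runtime flags used throughout this run, which the suggestion itself omits), option 1 combined with the version check the test performs at version_upgrade_test.go:248 would look roughly like:

	# Recreate the profile at the older Kubernetes version
	minikube delete -p kubernetes-upgrade-852078
	minikube start -p kubernetes-upgrade-852078 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio
	# Confirm which control-plane version is actually running
	kubectl --context kubernetes-upgrade-852078 version --output=json

The test takes the third option instead: it keeps the existing v1.31.1 cluster and simply restarts it, as shown in the next step.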
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-852078 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-852078 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m8.687066391s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-10-07 11:47:58.955461571 +0000 UTC m=+5181.138247591
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-852078 -n kubernetes-upgrade-852078
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-852078 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-852078 logs -n 25: (1.725913623s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-167819 sudo cat              | cilium-167819             | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-167819 sudo cat              | cilium-167819             | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-167819 sudo                  | cilium-167819             | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-167819 sudo                  | cilium-167819             | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-167819 sudo                  | cilium-167819             | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-167819 sudo find             | cilium-167819             | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-167819 sudo crio             | cilium-167819             | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-167819                       | cilium-167819             | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC | 07 Oct 24 11:44 UTC |
	| start   | -p cert-expiration-658191              | cert-expiration-658191    | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC | 07 Oct 24 11:45 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-056919              | running-upgrade-056919    | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC | 07 Oct 24 11:44 UTC |
	| start   | -p force-systemd-flag-468078           | force-systemd-flag-468078 | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC | 07 Oct 24 11:46 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-264062            | force-systemd-env-264062  | jenkins | v1.34.0 | 07 Oct 24 11:45 UTC | 07 Oct 24 11:45 UTC |
	| start   | -p cert-options-495675                 | cert-options-495675       | jenkins | v1.34.0 | 07 Oct 24 11:45 UTC | 07 Oct 24 11:46 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-852078           | kubernetes-upgrade-852078 | jenkins | v1.34.0 | 07 Oct 24 11:45 UTC | 07 Oct 24 11:45 UTC |
	| start   | -p kubernetes-upgrade-852078           | kubernetes-upgrade-852078 | jenkins | v1.34.0 | 07 Oct 24 11:45 UTC | 07 Oct 24 11:46 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-468078 ssh cat      | force-systemd-flag-468078 | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC | 07 Oct 24 11:46 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-468078           | force-systemd-flag-468078 | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC | 07 Oct 24 11:46 UTC |
	| start   | -p pause-328632 --memory=2048          | pause-328632              | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC | 07 Oct 24 11:47 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | cert-options-495675 ssh                | cert-options-495675       | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC | 07 Oct 24 11:46 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-495675 -- sudo         | cert-options-495675       | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC | 07 Oct 24 11:46 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-495675                 | cert-options-495675       | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC | 07 Oct 24 11:46 UTC |
	| start   | -p auto-167819 --memory=3072           | auto-167819               | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                     |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-852078           | kubernetes-upgrade-852078 | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-852078           | kubernetes-upgrade-852078 | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC | 07 Oct 24 11:47 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-328632                        | pause-328632              | jenkins | v1.34.0 | 07 Oct 24 11:47 UTC |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 11:47:58
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 11:47:58.235072   56886 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:47:58.235372   56886 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:47:58.235384   56886 out.go:358] Setting ErrFile to fd 2...
	I1007 11:47:58.235390   56886 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:47:58.235630   56886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 11:47:58.236587   56886 out.go:352] Setting JSON to false
	I1007 11:47:58.237840   56886 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5372,"bootTime":1728296306,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 11:47:58.237973   56886 start.go:139] virtualization: kvm guest
	I1007 11:47:58.240549   56886 out.go:177] * [pause-328632] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 11:47:58.242167   56886 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 11:47:58.242180   56886 notify.go:220] Checking for updates...
	I1007 11:47:58.245024   56886 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:47:58.246347   56886 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 11:47:58.247570   56886 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 11:47:58.248781   56886 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 11:47:58.250027   56886 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 11:47:58.251821   56886 config.go:182] Loaded profile config "pause-328632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:47:58.252247   56886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:47:58.252303   56886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:47:58.271936   56886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40041
	I1007 11:47:58.272492   56886 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:47:58.273099   56886 main.go:141] libmachine: Using API Version  1
	I1007 11:47:58.273123   56886 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:47:58.273561   56886 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:47:58.273771   56886 main.go:141] libmachine: (pause-328632) Calling .DriverName
	I1007 11:47:58.274035   56886 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 11:47:58.274374   56886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:47:58.274412   56886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:47:58.291571   56886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35181
	I1007 11:47:58.292132   56886 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:47:58.292705   56886 main.go:141] libmachine: Using API Version  1
	I1007 11:47:58.292751   56886 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:47:58.293161   56886 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:47:58.293413   56886 main.go:141] libmachine: (pause-328632) Calling .DriverName
	I1007 11:47:58.385465   56886 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 11:47:58.410004   56886 start.go:297] selected driver: kvm2
	I1007 11:47:58.410029   56886 start.go:901] validating driver "kvm2" against &{Name:pause-328632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-328632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.219 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:47:58.410216   56886 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 11:47:58.410570   56886 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:47:58.410651   56886 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19761-3912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 11:47:58.427375   56886 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 11:47:58.428469   56886 cni.go:84] Creating CNI manager for ""
	I1007 11:47:58.428541   56886 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 11:47:58.428612   56886 start.go:340] cluster config:
	{Name:pause-328632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-328632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.219 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:47:58.428783   56886 iso.go:125] acquiring lock: {Name:mk894f62b1e70fe8556c552f818beb1e4be6fad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:47:58.430863   56886 out.go:177] * Starting "pause-328632" primary control-plane node in "pause-328632" cluster
	I1007 11:47:58.432522   56886 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:47:58.432571   56886 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 11:47:58.432585   56886 cache.go:56] Caching tarball of preloaded images
	I1007 11:47:58.432679   56886 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 11:47:58.432692   56886 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 11:47:58.432865   56886 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632/config.json ...
	I1007 11:47:58.433089   56886 start.go:360] acquireMachinesLock for pause-328632: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 11:47:58.433143   56886 start.go:364] duration metric: took 32.907µs to acquireMachinesLock for "pause-328632"
	I1007 11:47:58.433163   56886 start.go:96] Skipping create...Using existing machine configuration
	I1007 11:47:58.433171   56886 fix.go:54] fixHost starting: 
	I1007 11:47:58.433478   56886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:47:58.433516   56886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:47:58.451125   56886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35895
	I1007 11:47:58.451707   56886 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:47:58.452298   56886 main.go:141] libmachine: Using API Version  1
	I1007 11:47:58.452327   56886 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:47:58.452654   56886 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:47:58.452862   56886 main.go:141] libmachine: (pause-328632) Calling .DriverName
	I1007 11:47:58.453041   56886 main.go:141] libmachine: (pause-328632) Calling .GetState
	I1007 11:47:58.454743   56886 fix.go:112] recreateIfNeeded on pause-328632: state=Running err=<nil>
	W1007 11:47:58.454779   56886 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 11:47:58.457879   56886 out.go:177] * Updating the running kvm2 "pause-328632" VM ...
	I1007 11:47:55.324723   56038 pod_ready.go:103] pod "coredns-7c65d6cfc9-4wtfv" in "kube-system" namespace has status "Ready":"False"
	I1007 11:47:57.825532   56038 pod_ready.go:103] pod "coredns-7c65d6cfc9-4wtfv" in "kube-system" namespace has status "Ready":"False"
	I1007 11:47:57.808974   56309 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 11:47:57.808993   56309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 11:47:57.809012   56309 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHHostname
	I1007 11:47:57.812600   56309 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:47:57.813337   56309 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:c0:4b", ip: ""} in network mk-kubernetes-upgrade-852078: {Iface:virbr1 ExpiryTime:2024-10-07 12:46:26 +0000 UTC Type:0 Mac:52:54:00:29:c0:4b Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-852078 Clientid:01:52:54:00:29:c0:4b}
	I1007 11:47:57.813363   56309 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined IP address 192.168.39.196 and MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:47:57.813517   56309 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHPort
	I1007 11:47:57.813694   56309 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHKeyPath
	I1007 11:47:57.813898   56309 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHUsername
	I1007 11:47:57.814040   56309 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/kubernetes-upgrade-852078/id_rsa Username:docker}
	I1007 11:47:57.823816   56309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46625
	I1007 11:47:57.824359   56309 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:47:57.824791   56309 main.go:141] libmachine: Using API Version  1
	I1007 11:47:57.824823   56309 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:47:57.825636   56309 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:47:57.825833   56309 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetState
	I1007 11:47:57.827453   56309 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .DriverName
	I1007 11:47:57.827774   56309 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 11:47:57.827791   56309 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 11:47:57.827809   56309 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHHostname
	I1007 11:47:57.830497   56309 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:47:57.830867   56309 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:c0:4b", ip: ""} in network mk-kubernetes-upgrade-852078: {Iface:virbr1 ExpiryTime:2024-10-07 12:46:26 +0000 UTC Type:0 Mac:52:54:00:29:c0:4b Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:kubernetes-upgrade-852078 Clientid:01:52:54:00:29:c0:4b}
	I1007 11:47:57.830900   56309 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | domain kubernetes-upgrade-852078 has defined IP address 192.168.39.196 and MAC address 52:54:00:29:c0:4b in network mk-kubernetes-upgrade-852078
	I1007 11:47:57.831029   56309 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHPort
	I1007 11:47:57.831209   56309 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHKeyPath
	I1007 11:47:57.831359   56309 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .GetSSHUsername
	I1007 11:47:57.831483   56309 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/kubernetes-upgrade-852078/id_rsa Username:docker}
	I1007 11:47:57.977537   56309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 11:47:58.004869   56309 api_server.go:52] waiting for apiserver process to appear ...
	I1007 11:47:58.004957   56309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 11:47:58.020660   56309 api_server.go:72] duration metric: took 260.420646ms to wait for apiserver process to appear ...
	I1007 11:47:58.020688   56309 api_server.go:88] waiting for apiserver healthz status ...
	I1007 11:47:58.020706   56309 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I1007 11:47:58.026700   56309 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I1007 11:47:58.027754   56309 api_server.go:141] control plane version: v1.31.1
	I1007 11:47:58.027775   56309 api_server.go:131] duration metric: took 7.081383ms to wait for apiserver health ...
	I1007 11:47:58.027783   56309 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 11:47:58.035073   56309 system_pods.go:59] 8 kube-system pods found
	I1007 11:47:58.035100   56309 system_pods.go:61] "coredns-7c65d6cfc9-cx8d7" [69a73ac4-6e9b-4fd3-ac84-62facf8c3b08] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 11:47:58.035107   56309 system_pods.go:61] "coredns-7c65d6cfc9-ppjfx" [a57e6c58-71b4-4e72-844a-7da9f2f41bec] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 11:47:58.035116   56309 system_pods.go:61] "etcd-kubernetes-upgrade-852078" [beb61349-2a41-40d6-9a18-8addb7c03967] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1007 11:47:58.035122   56309 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-852078" [89ef6fb1-bd71-448e-9b3c-93ce698a95d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1007 11:47:58.035130   56309 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-852078" [5ca5d852-135a-49b1-b1d5-284b22dadbb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1007 11:47:58.035135   56309 system_pods.go:61] "kube-proxy-f86nz" [195388ee-ba4b-4150-beec-af13160df9d3] Running
	I1007 11:47:58.035142   56309 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-852078" [b2deddf1-3cfb-428e-8688-ae97ca05f7cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1007 11:47:58.035145   56309 system_pods.go:61] "storage-provisioner" [51dc59cd-8830-461f-8797-4d846cd3a5cc] Running
	I1007 11:47:58.035150   56309 system_pods.go:74] duration metric: took 7.362817ms to wait for pod list to return data ...
	I1007 11:47:58.035158   56309 kubeadm.go:582] duration metric: took 274.927753ms to wait for: map[apiserver:true system_pods:true]
	I1007 11:47:58.035171   56309 node_conditions.go:102] verifying NodePressure condition ...
	I1007 11:47:58.038121   56309 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 11:47:58.038149   56309 node_conditions.go:123] node cpu capacity is 2
	I1007 11:47:58.038161   56309 node_conditions.go:105] duration metric: took 2.985361ms to run NodePressure ...
	I1007 11:47:58.038175   56309 start.go:241] waiting for startup goroutines ...
	I1007 11:47:58.072312   56309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 11:47:58.076289   56309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 11:47:58.858645   56309 main.go:141] libmachine: Making call to close driver server
	I1007 11:47:58.858671   56309 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .Close
	I1007 11:47:58.858702   56309 main.go:141] libmachine: Making call to close driver server
	I1007 11:47:58.858724   56309 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .Close
	I1007 11:47:58.858975   56309 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:47:58.858992   56309 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:47:58.859002   56309 main.go:141] libmachine: Making call to close driver server
	I1007 11:47:58.859011   56309 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .Close
	I1007 11:47:58.859188   56309 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | Closing plugin on server side
	I1007 11:47:58.859210   56309 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | Closing plugin on server side
	I1007 11:47:58.859212   56309 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:47:58.859223   56309 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:47:58.859235   56309 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:47:58.859225   56309 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:47:58.859308   56309 main.go:141] libmachine: Making call to close driver server
	I1007 11:47:58.859323   56309 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .Close
	I1007 11:47:58.859604   56309 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:47:58.859629   56309 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:47:58.866771   56309 main.go:141] libmachine: Making call to close driver server
	I1007 11:47:58.866795   56309 main.go:141] libmachine: (kubernetes-upgrade-852078) Calling .Close
	I1007 11:47:58.868077   56309 main.go:141] libmachine: (kubernetes-upgrade-852078) DBG | Closing plugin on server side
	I1007 11:47:58.868106   56309 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:47:58.868120   56309 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:47:58.870568   56309 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1007 11:47:58.872082   56309 addons.go:510] duration metric: took 1.111823902s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1007 11:47:58.872124   56309 start.go:246] waiting for cluster config update ...
	I1007 11:47:58.872135   56309 start.go:255] writing updated cluster config ...
	I1007 11:47:58.872470   56309 ssh_runner.go:195] Run: rm -f paused
	I1007 11:47:58.935424   56309 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 11:47:58.937670   56309 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-852078" cluster and "default" namespace by default
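	For reference, the apiserver healthz wait shown above (api_server.go polling https://192.168.39.196:8443/healthz until it returns 200 "ok") can be reproduced outside the test harness with a small standalone program. The sketch below is illustrative only, not the minikube implementation: it hard-codes the 192.168.39.196:8443 endpoint reported in this particular run, skips TLS verification instead of loading the cluster CA, and uses arbitrary retry/timeout values.

	// Illustrative sketch (assumption: not minikube source). Polls the
	// apiserver /healthz endpoint, as the api_server.go wait loop in the
	// log above does, until it answers 200 "ok" or the retries run out.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Skip certificate verification for illustration; a real client
		// would load the minikube cluster CA instead.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		for i := 0; i < 30; i++ {
			resp, err := client.Get("https://192.168.39.196:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					// Matches the "returned 200: ok" line in the log above.
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("apiserver did not become healthy in time")
	}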
	
	
	==> CRI-O <==
	Oct 07 11:47:59 kubernetes-upgrade-852078 crio[3009]: time="2024-10-07 11:47:59.726118227Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301679726042016,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=39481056-597d-4afe-a06f-9d5ceebcfce8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:47:59 kubernetes-upgrade-852078 crio[3009]: time="2024-10-07 11:47:59.726617409Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=588ccc96-354b-426d-9d38-5c5ec4daf751 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:47:59 kubernetes-upgrade-852078 crio[3009]: time="2024-10-07 11:47:59.726672962Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=588ccc96-354b-426d-9d38-5c5ec4daf751 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:47:59 kubernetes-upgrade-852078 crio[3009]: time="2024-10-07 11:47:59.727118307Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18b1b110373c56078a5149341d741b0073a90d6f6ca5734149e28d41add9963d,PodSandboxId:94384c47d6bb5afdd1212effc91c2c940d8e54599ff249a6eecbe22637559b3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728301676690999384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cx8d7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69a73ac4-6e9b-4fd3-ac84-62facf8c3b08,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a31d547893e5d12feb0cc6821c159a2f04d3dc64b03a63f8959c83d9149c566b,PodSandboxId:226a268bc165d3f5daee2356074ff71d4d2e068044eb1cdd31681c0d7956f277,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728301676701817282,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ppjfx,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a57e6c58-71b4-4e72-844a-7da9f2f41bec,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09050343ce53c495ea8ea4919861d0cddb7b0520b06364e40056325c3fb7d0df,PodSandboxId:bc33cdeb4b841f1be60bd92c392cdca9509de8c160b27c56a8e86191dac232d2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAIN
ER_RUNNING,CreatedAt:1728301676689482998,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f86nz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 195388ee-ba4b-4150-beec-af13160df9d3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d16e7a1642a56a09be9ffff8aa471862b66684f10360b6a5f7361e4bcdf5ac,PodSandboxId:68fe6df8ffd93749e55730d3eb70e9e7c7e21a700a733db4e69dfacf2ed13ccb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
8301676638412113,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51dc59cd-8830-461f-8797-4d846cd3a5cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4241184a78c32c6424fb7ffc9613f7d78750e98915c5c8977f21613abd692796,PodSandboxId:ca01598fec56ab82b1cee3d3f0e8e3ed35d365ffc0c7e4382264e32643fae60a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728301671829814252,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d49ee3a33ceaa034cab7b4a921d8cba8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b7c0c701c3e2f69de3bc5d37d0c75aaca72a8977e10b6d9f759d01b7ce267a3,PodSandboxId:ae6d35dd51272e65c598b58f4f3365bc90aecb10b5b6350030729ccfc5a48044,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728301671810449010,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b46d8ef7e8a3c59993f966f337dc391,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f7d008ff95f44084bc43eaae46c6aadb856f2e63200a3f17d819d3e02762078,PodSandboxId:619fa039d67695cd090076743cbabf6735456874f3f8da8f302d24c31befee87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728301671799849192,Labels:map[st
ring]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3da50c92dfe9988ea72577fa07c5395,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da54cb77064d018faef50c732ed2c87b284c4bef13d5c2b47c23b5d08ab1be8,PodSandboxId:68fe6df8ffd93749e55730d3eb70e9e7c7e21a700a733db4e69dfacf2ed13ccb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728301669471187239,Labels:map[str
ing]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51dc59cd-8830-461f-8797-4d846cd3a5cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815b53fecd7760ebf311850137704e22eb2eb2159ed567c642b2790717964c26,PodSandboxId:c71505b94b0dd8f53aeeb3ed7ce2b41eac9bae7636b2affe0a270d4b9c08e507,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728301669456527495,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcd31d366c568836653b45bd1369f924,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c30f7056dae3f128d2042adf1c32381f46bc733669633e7542a3d035f40770b,PodSandboxId:226a268bc165d3f5daee2356074ff71d4d2e068044eb1cdd31681c0d7956f277,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728301654325400553,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ppjfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57e6c58-71b4-4e72-844a-7da9f2f41bec,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3314f7eaf6a6bf8bbe66e366d0bbd1abc6096c5073a657e6209c1afddbeae2e,PodSandboxId:94384c47d6bb5afdd1212effc91c2c940d8e54599ff249a6eecbe22637559b3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728301654239610062,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cx8d7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69a73ac4-6e9b-4fd3-ac84-62facf8c3b08,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d96e98772b5c726c2d0562a416d93ff5a7960c90d8af28bc83bcc108935e334,PodSandboxId:999397cba30c5df3fca2231ac9bd7f7b8bdde109b47d9def7581adadff0
b6b77,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728301650647590464,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f86nz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 195388ee-ba4b-4150-beec-af13160df9d3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd1d2f99b8b01c1e11c5a5df8bbb0bf44934a70d5d7d0609414a416d8a49c072,PodSandboxId:0f91ef3479fc93aeb0f4bee02919c07f6b7527357fc3cdcd5eac3d55b067fdd4,Metadata:&ContainerMetadata{
Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728301650595878477,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d49ee3a33ceaa034cab7b4a921d8cba8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc42540b8528bfe207e0c8c44721d89be792934c24ec1e22f889ee5298a3ac97,PodSandboxId:f0ab26664b8f7451a89a665a872043b21fa2727d9503d5fac06b688899f30619,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt
:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728301650486725598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b46d8ef7e8a3c59993f966f337dc391,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:366d5ffe7a3d9966bf690c9066de027de029803785285a216174394da317749b,PodSandboxId:5efb0d2e116d82df5141fdac966c579fad5d67e7a0184518b78539a04124336f,Metadata:&ContainerMetadata{Name:kube-apise
rver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728301650488787017,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3da50c92dfe9988ea72577fa07c5395,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9221c394ce59fd9c26f84b5b40c69dfe3cf9c4d24c49263902699a618527c544,PodSandboxId:97b0411d4ca5672b41e9663509fc0a26f04f40a2006f34dc5e8c973abefe6ef3,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728301650092385534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcd31d366c568836653b45bd1369f924,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=588ccc96-354b-426d-9d38-5c5ec4daf751 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:47:59 kubernetes-upgrade-852078 crio[3009]: time="2024-10-07 11:47:59.776227512Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad2c7acf-5ddb-4ab2-9935-0322c4a529cb name=/runtime.v1.RuntimeService/Version
	Oct 07 11:47:59 kubernetes-upgrade-852078 crio[3009]: time="2024-10-07 11:47:59.776623042Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad2c7acf-5ddb-4ab2-9935-0322c4a529cb name=/runtime.v1.RuntimeService/Version
	Oct 07 11:47:59 kubernetes-upgrade-852078 crio[3009]: time="2024-10-07 11:47:59.778228560Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1bdcb27-d2da-4a37-a09f-333c33893f42 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:47:59 kubernetes-upgrade-852078 crio[3009]: time="2024-10-07 11:47:59.778612789Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301679778592568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1bdcb27-d2da-4a37-a09f-333c33893f42 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:47:59 kubernetes-upgrade-852078 crio[3009]: time="2024-10-07 11:47:59.779124108Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48161019-7dbc-4c12-a198-cff5440ffc20 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:47:59 kubernetes-upgrade-852078 crio[3009]: time="2024-10-07 11:47:59.779175435Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48161019-7dbc-4c12-a198-cff5440ffc20 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:47:59 kubernetes-upgrade-852078 crio[3009]: time="2024-10-07 11:47:59.779562655Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18b1b110373c56078a5149341d741b0073a90d6f6ca5734149e28d41add9963d,PodSandboxId:94384c47d6bb5afdd1212effc91c2c940d8e54599ff249a6eecbe22637559b3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728301676690999384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cx8d7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69a73ac4-6e9b-4fd3-ac84-62facf8c3b08,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a31d547893e5d12feb0cc6821c159a2f04d3dc64b03a63f8959c83d9149c566b,PodSandboxId:226a268bc165d3f5daee2356074ff71d4d2e068044eb1cdd31681c0d7956f277,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728301676701817282,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ppjfx,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a57e6c58-71b4-4e72-844a-7da9f2f41bec,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09050343ce53c495ea8ea4919861d0cddb7b0520b06364e40056325c3fb7d0df,PodSandboxId:bc33cdeb4b841f1be60bd92c392cdca9509de8c160b27c56a8e86191dac232d2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAIN
ER_RUNNING,CreatedAt:1728301676689482998,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f86nz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 195388ee-ba4b-4150-beec-af13160df9d3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d16e7a1642a56a09be9ffff8aa471862b66684f10360b6a5f7361e4bcdf5ac,PodSandboxId:68fe6df8ffd93749e55730d3eb70e9e7c7e21a700a733db4e69dfacf2ed13ccb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
8301676638412113,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51dc59cd-8830-461f-8797-4d846cd3a5cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4241184a78c32c6424fb7ffc9613f7d78750e98915c5c8977f21613abd692796,PodSandboxId:ca01598fec56ab82b1cee3d3f0e8e3ed35d365ffc0c7e4382264e32643fae60a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728301671829814252,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d49ee3a33ceaa034cab7b4a921d8cba8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b7c0c701c3e2f69de3bc5d37d0c75aaca72a8977e10b6d9f759d01b7ce267a3,PodSandboxId:ae6d35dd51272e65c598b58f4f3365bc90aecb10b5b6350030729ccfc5a48044,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728301671810449010,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b46d8ef7e8a3c59993f966f337dc391,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f7d008ff95f44084bc43eaae46c6aadb856f2e63200a3f17d819d3e02762078,PodSandboxId:619fa039d67695cd090076743cbabf6735456874f3f8da8f302d24c31befee87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728301671799849192,Labels:map[st
ring]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3da50c92dfe9988ea72577fa07c5395,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da54cb77064d018faef50c732ed2c87b284c4bef13d5c2b47c23b5d08ab1be8,PodSandboxId:68fe6df8ffd93749e55730d3eb70e9e7c7e21a700a733db4e69dfacf2ed13ccb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728301669471187239,Labels:map[str
ing]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51dc59cd-8830-461f-8797-4d846cd3a5cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815b53fecd7760ebf311850137704e22eb2eb2159ed567c642b2790717964c26,PodSandboxId:c71505b94b0dd8f53aeeb3ed7ce2b41eac9bae7636b2affe0a270d4b9c08e507,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728301669456527495,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcd31d366c568836653b45bd1369f924,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c30f7056dae3f128d2042adf1c32381f46bc733669633e7542a3d035f40770b,PodSandboxId:226a268bc165d3f5daee2356074ff71d4d2e068044eb1cdd31681c0d7956f277,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728301654325400553,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ppjfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57e6c58-71b4-4e72-844a-7da9f2f41bec,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3314f7eaf6a6bf8bbe66e366d0bbd1abc6096c5073a657e6209c1afddbeae2e,PodSandboxId:94384c47d6bb5afdd1212effc91c2c940d8e54599ff249a6eecbe22637559b3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728301654239610062,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cx8d7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69a73ac4-6e9b-4fd3-ac84-62facf8c3b08,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d96e98772b5c726c2d0562a416d93ff5a7960c90d8af28bc83bcc108935e334,PodSandboxId:999397cba30c5df3fca2231ac9bd7f7b8bdde109b47d9def7581adadff0
b6b77,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728301650647590464,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f86nz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 195388ee-ba4b-4150-beec-af13160df9d3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd1d2f99b8b01c1e11c5a5df8bbb0bf44934a70d5d7d0609414a416d8a49c072,PodSandboxId:0f91ef3479fc93aeb0f4bee02919c07f6b7527357fc3cdcd5eac3d55b067fdd4,Metadata:&ContainerMetadata{
Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728301650595878477,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d49ee3a33ceaa034cab7b4a921d8cba8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc42540b8528bfe207e0c8c44721d89be792934c24ec1e22f889ee5298a3ac97,PodSandboxId:f0ab26664b8f7451a89a665a872043b21fa2727d9503d5fac06b688899f30619,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt
:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728301650486725598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b46d8ef7e8a3c59993f966f337dc391,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:366d5ffe7a3d9966bf690c9066de027de029803785285a216174394da317749b,PodSandboxId:5efb0d2e116d82df5141fdac966c579fad5d67e7a0184518b78539a04124336f,Metadata:&ContainerMetadata{Name:kube-apise
rver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728301650488787017,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3da50c92dfe9988ea72577fa07c5395,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9221c394ce59fd9c26f84b5b40c69dfe3cf9c4d24c49263902699a618527c544,PodSandboxId:97b0411d4ca5672b41e9663509fc0a26f04f40a2006f34dc5e8c973abefe6ef3,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728301650092385534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcd31d366c568836653b45bd1369f924,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=48161019-7dbc-4c12-a198-cff5440ffc20 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:47:59 kubernetes-upgrade-852078 crio[3009]: time="2024-10-07 11:47:59.825975838Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=67a3af6e-aca8-433a-8c18-60d08369e9a7 name=/runtime.v1.RuntimeService/Version
	Oct 07 11:47:59 kubernetes-upgrade-852078 crio[3009]: time="2024-10-07 11:47:59.826110312Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=67a3af6e-aca8-433a-8c18-60d08369e9a7 name=/runtime.v1.RuntimeService/Version
	Oct 07 11:47:59 kubernetes-upgrade-852078 crio[3009]: time="2024-10-07 11:47:59.827646895Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f473afbd-d7a3-4995-a9ef-abe786ccddde name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:47:59 kubernetes-upgrade-852078 crio[3009]: time="2024-10-07 11:47:59.828267361Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301679828219615,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f473afbd-d7a3-4995-a9ef-abe786ccddde name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:47:59 kubernetes-upgrade-852078 crio[3009]: time="2024-10-07 11:47:59.828998604Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6cedc05-7784-4796-8ba3-c649e2d4f513 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:47:59 kubernetes-upgrade-852078 crio[3009]: time="2024-10-07 11:47:59.829132114Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6cedc05-7784-4796-8ba3-c649e2d4f513 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:47:59 kubernetes-upgrade-852078 crio[3009]: time="2024-10-07 11:47:59.830534529Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18b1b110373c56078a5149341d741b0073a90d6f6ca5734149e28d41add9963d,PodSandboxId:94384c47d6bb5afdd1212effc91c2c940d8e54599ff249a6eecbe22637559b3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728301676690999384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cx8d7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69a73ac4-6e9b-4fd3-ac84-62facf8c3b08,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a31d547893e5d12feb0cc6821c159a2f04d3dc64b03a63f8959c83d9149c566b,PodSandboxId:226a268bc165d3f5daee2356074ff71d4d2e068044eb1cdd31681c0d7956f277,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728301676701817282,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ppjfx,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a57e6c58-71b4-4e72-844a-7da9f2f41bec,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09050343ce53c495ea8ea4919861d0cddb7b0520b06364e40056325c3fb7d0df,PodSandboxId:bc33cdeb4b841f1be60bd92c392cdca9509de8c160b27c56a8e86191dac232d2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAIN
ER_RUNNING,CreatedAt:1728301676689482998,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f86nz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 195388ee-ba4b-4150-beec-af13160df9d3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d16e7a1642a56a09be9ffff8aa471862b66684f10360b6a5f7361e4bcdf5ac,PodSandboxId:68fe6df8ffd93749e55730d3eb70e9e7c7e21a700a733db4e69dfacf2ed13ccb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
8301676638412113,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51dc59cd-8830-461f-8797-4d846cd3a5cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4241184a78c32c6424fb7ffc9613f7d78750e98915c5c8977f21613abd692796,PodSandboxId:ca01598fec56ab82b1cee3d3f0e8e3ed35d365ffc0c7e4382264e32643fae60a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728301671829814252,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d49ee3a33ceaa034cab7b4a921d8cba8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b7c0c701c3e2f69de3bc5d37d0c75aaca72a8977e10b6d9f759d01b7ce267a3,PodSandboxId:ae6d35dd51272e65c598b58f4f3365bc90aecb10b5b6350030729ccfc5a48044,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728301671810449010,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b46d8ef7e8a3c59993f966f337dc391,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f7d008ff95f44084bc43eaae46c6aadb856f2e63200a3f17d819d3e02762078,PodSandboxId:619fa039d67695cd090076743cbabf6735456874f3f8da8f302d24c31befee87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728301671799849192,Labels:map[st
ring]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3da50c92dfe9988ea72577fa07c5395,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da54cb77064d018faef50c732ed2c87b284c4bef13d5c2b47c23b5d08ab1be8,PodSandboxId:68fe6df8ffd93749e55730d3eb70e9e7c7e21a700a733db4e69dfacf2ed13ccb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728301669471187239,Labels:map[str
ing]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51dc59cd-8830-461f-8797-4d846cd3a5cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815b53fecd7760ebf311850137704e22eb2eb2159ed567c642b2790717964c26,PodSandboxId:c71505b94b0dd8f53aeeb3ed7ce2b41eac9bae7636b2affe0a270d4b9c08e507,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728301669456527495,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcd31d366c568836653b45bd1369f924,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c30f7056dae3f128d2042adf1c32381f46bc733669633e7542a3d035f40770b,PodSandboxId:226a268bc165d3f5daee2356074ff71d4d2e068044eb1cdd31681c0d7956f277,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728301654325400553,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ppjfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57e6c58-71b4-4e72-844a-7da9f2f41bec,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3314f7eaf6a6bf8bbe66e366d0bbd1abc6096c5073a657e6209c1afddbeae2e,PodSandboxId:94384c47d6bb5afdd1212effc91c2c940d8e54599ff249a6eecbe22637559b3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728301654239610062,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cx8d7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69a73ac4-6e9b-4fd3-ac84-62facf8c3b08,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d96e98772b5c726c2d0562a416d93ff5a7960c90d8af28bc83bcc108935e334,PodSandboxId:999397cba30c5df3fca2231ac9bd7f7b8bdde109b47d9def7581adadff0
b6b77,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728301650647590464,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f86nz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 195388ee-ba4b-4150-beec-af13160df9d3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd1d2f99b8b01c1e11c5a5df8bbb0bf44934a70d5d7d0609414a416d8a49c072,PodSandboxId:0f91ef3479fc93aeb0f4bee02919c07f6b7527357fc3cdcd5eac3d55b067fdd4,Metadata:&ContainerMetadata{
Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728301650595878477,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d49ee3a33ceaa034cab7b4a921d8cba8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc42540b8528bfe207e0c8c44721d89be792934c24ec1e22f889ee5298a3ac97,PodSandboxId:f0ab26664b8f7451a89a665a872043b21fa2727d9503d5fac06b688899f30619,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt
:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728301650486725598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b46d8ef7e8a3c59993f966f337dc391,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:366d5ffe7a3d9966bf690c9066de027de029803785285a216174394da317749b,PodSandboxId:5efb0d2e116d82df5141fdac966c579fad5d67e7a0184518b78539a04124336f,Metadata:&ContainerMetadata{Name:kube-apise
rver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728301650488787017,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3da50c92dfe9988ea72577fa07c5395,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9221c394ce59fd9c26f84b5b40c69dfe3cf9c4d24c49263902699a618527c544,PodSandboxId:97b0411d4ca5672b41e9663509fc0a26f04f40a2006f34dc5e8c973abefe6ef3,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728301650092385534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcd31d366c568836653b45bd1369f924,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f6cedc05-7784-4796-8ba3-c649e2d4f513 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:47:59 kubernetes-upgrade-852078 crio[3009]: time="2024-10-07 11:47:59.871950682Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8d01627d-3b2d-4048-bc69-c69f81ba6d44 name=/runtime.v1.RuntimeService/Version
	Oct 07 11:47:59 kubernetes-upgrade-852078 crio[3009]: time="2024-10-07 11:47:59.872126855Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8d01627d-3b2d-4048-bc69-c69f81ba6d44 name=/runtime.v1.RuntimeService/Version
	Oct 07 11:47:59 kubernetes-upgrade-852078 crio[3009]: time="2024-10-07 11:47:59.873472398Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bc88e686-1032-462f-a347-fe3c8edf99ba name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:47:59 kubernetes-upgrade-852078 crio[3009]: time="2024-10-07 11:47:59.873864456Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301679873838870,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc88e686-1032-462f-a347-fe3c8edf99ba name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:47:59 kubernetes-upgrade-852078 crio[3009]: time="2024-10-07 11:47:59.874547234Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83af0317-e1bf-40d6-a481-9a5b86548f2f name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:47:59 kubernetes-upgrade-852078 crio[3009]: time="2024-10-07 11:47:59.874623197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83af0317-e1bf-40d6-a481-9a5b86548f2f name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:47:59 kubernetes-upgrade-852078 crio[3009]: time="2024-10-07 11:47:59.874983277Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18b1b110373c56078a5149341d741b0073a90d6f6ca5734149e28d41add9963d,PodSandboxId:94384c47d6bb5afdd1212effc91c2c940d8e54599ff249a6eecbe22637559b3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728301676690999384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cx8d7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69a73ac4-6e9b-4fd3-ac84-62facf8c3b08,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a31d547893e5d12feb0cc6821c159a2f04d3dc64b03a63f8959c83d9149c566b,PodSandboxId:226a268bc165d3f5daee2356074ff71d4d2e068044eb1cdd31681c0d7956f277,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728301676701817282,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ppjfx,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a57e6c58-71b4-4e72-844a-7da9f2f41bec,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09050343ce53c495ea8ea4919861d0cddb7b0520b06364e40056325c3fb7d0df,PodSandboxId:bc33cdeb4b841f1be60bd92c392cdca9509de8c160b27c56a8e86191dac232d2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAIN
ER_RUNNING,CreatedAt:1728301676689482998,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f86nz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 195388ee-ba4b-4150-beec-af13160df9d3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d16e7a1642a56a09be9ffff8aa471862b66684f10360b6a5f7361e4bcdf5ac,PodSandboxId:68fe6df8ffd93749e55730d3eb70e9e7c7e21a700a733db4e69dfacf2ed13ccb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
8301676638412113,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51dc59cd-8830-461f-8797-4d846cd3a5cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4241184a78c32c6424fb7ffc9613f7d78750e98915c5c8977f21613abd692796,PodSandboxId:ca01598fec56ab82b1cee3d3f0e8e3ed35d365ffc0c7e4382264e32643fae60a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728301671829814252,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d49ee3a33ceaa034cab7b4a921d8cba8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b7c0c701c3e2f69de3bc5d37d0c75aaca72a8977e10b6d9f759d01b7ce267a3,PodSandboxId:ae6d35dd51272e65c598b58f4f3365bc90aecb10b5b6350030729ccfc5a48044,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728301671810449010,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b46d8ef7e8a3c59993f966f337dc391,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f7d008ff95f44084bc43eaae46c6aadb856f2e63200a3f17d819d3e02762078,PodSandboxId:619fa039d67695cd090076743cbabf6735456874f3f8da8f302d24c31befee87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728301671799849192,Labels:map[st
ring]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3da50c92dfe9988ea72577fa07c5395,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0da54cb77064d018faef50c732ed2c87b284c4bef13d5c2b47c23b5d08ab1be8,PodSandboxId:68fe6df8ffd93749e55730d3eb70e9e7c7e21a700a733db4e69dfacf2ed13ccb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728301669471187239,Labels:map[str
ing]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51dc59cd-8830-461f-8797-4d846cd3a5cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815b53fecd7760ebf311850137704e22eb2eb2159ed567c642b2790717964c26,PodSandboxId:c71505b94b0dd8f53aeeb3ed7ce2b41eac9bae7636b2affe0a270d4b9c08e507,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728301669456527495,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcd31d366c568836653b45bd1369f924,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c30f7056dae3f128d2042adf1c32381f46bc733669633e7542a3d035f40770b,PodSandboxId:226a268bc165d3f5daee2356074ff71d4d2e068044eb1cdd31681c0d7956f277,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728301654325400553,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ppjfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a57e6c58-71b4-4e72-844a-7da9f2f41bec,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3314f7eaf6a6bf8bbe66e366d0bbd1abc6096c5073a657e6209c1afddbeae2e,PodSandboxId:94384c47d6bb5afdd1212effc91c2c940d8e54599ff249a6eecbe22637559b3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728301654239610062,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cx8d7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69a73ac4-6e9b-4fd3-ac84-62facf8c3b08,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d96e98772b5c726c2d0562a416d93ff5a7960c90d8af28bc83bcc108935e334,PodSandboxId:999397cba30c5df3fca2231ac9bd7f7b8bdde109b47d9def7581adadff0
b6b77,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728301650647590464,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f86nz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 195388ee-ba4b-4150-beec-af13160df9d3,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd1d2f99b8b01c1e11c5a5df8bbb0bf44934a70d5d7d0609414a416d8a49c072,PodSandboxId:0f91ef3479fc93aeb0f4bee02919c07f6b7527357fc3cdcd5eac3d55b067fdd4,Metadata:&ContainerMetadata{
Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728301650595878477,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d49ee3a33ceaa034cab7b4a921d8cba8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc42540b8528bfe207e0c8c44721d89be792934c24ec1e22f889ee5298a3ac97,PodSandboxId:f0ab26664b8f7451a89a665a872043b21fa2727d9503d5fac06b688899f30619,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt
:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728301650486725598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b46d8ef7e8a3c59993f966f337dc391,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:366d5ffe7a3d9966bf690c9066de027de029803785285a216174394da317749b,PodSandboxId:5efb0d2e116d82df5141fdac966c579fad5d67e7a0184518b78539a04124336f,Metadata:&ContainerMetadata{Name:kube-apise
rver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728301650488787017,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3da50c92dfe9988ea72577fa07c5395,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9221c394ce59fd9c26f84b5b40c69dfe3cf9c4d24c49263902699a618527c544,PodSandboxId:97b0411d4ca5672b41e9663509fc0a26f04f40a2006f34dc5e8c973abefe6ef3,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728301650092385534,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-852078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcd31d366c568836653b45bd1369f924,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83af0317-e1bf-40d6-a481-9a5b86548f2f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a31d547893e5d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   226a268bc165d       coredns-7c65d6cfc9-ppjfx
	18b1b110373c5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   94384c47d6bb5       coredns-7c65d6cfc9-cx8d7
	09050343ce53c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   3 seconds ago       Running             kube-proxy                2                   bc33cdeb4b841       kube-proxy-f86nz
	24d16e7a1642a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       3                   68fe6df8ffd93       storage-provisioner
	4241184a78c32       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   8 seconds ago       Running             etcd                      2                   ca01598fec56a       etcd-kubernetes-upgrade-852078
	3b7c0c701c3e2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   8 seconds ago       Running             kube-controller-manager   2                   ae6d35dd51272       kube-controller-manager-kubernetes-upgrade-852078
	1f7d008ff95f4       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   8 seconds ago       Running             kube-apiserver            2                   619fa039d6769       kube-apiserver-kubernetes-upgrade-852078
	0da54cb77064d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 seconds ago      Exited              storage-provisioner       2                   68fe6df8ffd93       storage-provisioner
	815b53fecd776       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   10 seconds ago      Running             kube-scheduler            2                   c71505b94b0dd       kube-scheduler-kubernetes-upgrade-852078
	1c30f7056dae3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   25 seconds ago      Exited              coredns                   1                   226a268bc165d       coredns-7c65d6cfc9-ppjfx
	b3314f7eaf6a6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   25 seconds ago      Exited              coredns                   1                   94384c47d6bb5       coredns-7c65d6cfc9-cx8d7
	3d96e98772b5c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   29 seconds ago      Exited              kube-proxy                1                   999397cba30c5       kube-proxy-f86nz
	dd1d2f99b8b01       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   29 seconds ago      Exited              etcd                      1                   0f91ef3479fc9       etcd-kubernetes-upgrade-852078
	366d5ffe7a3d9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   29 seconds ago      Exited              kube-apiserver            1                   5efb0d2e116d8       kube-apiserver-kubernetes-upgrade-852078
	fc42540b8528b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   29 seconds ago      Exited              kube-controller-manager   1                   f0ab26664b8f7       kube-controller-manager-kubernetes-upgrade-852078
	9221c394ce59f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   29 seconds ago      Exited              kube-scheduler            1                   97b0411d4ca56       kube-scheduler-kubernetes-upgrade-852078
	
	
	==> coredns [18b1b110373c56078a5149341d741b0073a90d6f6ca5734149e28d41add9963d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [1c30f7056dae3f128d2042adf1c32381f46bc733669633e7542a3d035f40770b] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a31d547893e5d12feb0cc6821c159a2f04d3dc64b03a63f8959c83d9149c566b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [b3314f7eaf6a6bf8bbe66e366d0bbd1abc6096c5073a657e6209c1afddbeae2e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-852078
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-852078
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 11:46:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-852078
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 11:47:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 11:47:55 +0000   Mon, 07 Oct 2024 11:46:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 11:47:55 +0000   Mon, 07 Oct 2024 11:46:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 11:47:55 +0000   Mon, 07 Oct 2024 11:46:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 11:47:55 +0000   Mon, 07 Oct 2024 11:46:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.196
	  Hostname:    kubernetes-upgrade-852078
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ecb4bdc277a94e8693ca2f9e2fe2a7d4
	  System UUID:                ecb4bdc2-77a9-4e86-93ca-2f9e2fe2a7d4
	  Boot ID:                    eeab5796-1cd5-4ae0-af90-a39b56dd1436
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-cx8d7                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     67s
	  kube-system                 coredns-7c65d6cfc9-ppjfx                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     67s
	  kube-system                 etcd-kubernetes-upgrade-852078                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	  kube-system                 kube-apiserver-kubernetes-upgrade-852078             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-852078    200m (10%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-proxy-f86nz                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-scheduler-kubernetes-upgrade-852078             100m (5%)     0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 66s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 79s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    78s (x8 over 79s)  kubelet          Node kubernetes-upgrade-852078 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s (x7 over 79s)  kubelet          Node kubernetes-upgrade-852078 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  78s (x8 over 79s)  kubelet          Node kubernetes-upgrade-852078 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           68s                node-controller  Node kubernetes-upgrade-852078 event: Registered Node kubernetes-upgrade-852078 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-852078 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-852078 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet          Node kubernetes-upgrade-852078 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-852078 event: Registered Node kubernetes-upgrade-852078 in Controller
	
	
	==> dmesg <==
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.017611] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.064657] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071392] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.188365] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.140758] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.304856] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +4.332367] systemd-fstab-generator[720]: Ignoring "noauto" option for root device
	[  +0.058590] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.839767] systemd-fstab-generator[841]: Ignoring "noauto" option for root device
	[  +8.139787] systemd-fstab-generator[1234]: Ignoring "noauto" option for root device
	[  +0.096108] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.013547] kauditd_printk_skb: 92 callbacks suppressed
	[Oct 7 11:47] systemd-fstab-generator[2182]: Ignoring "noauto" option for root device
	[  +0.084701] kauditd_printk_skb: 6 callbacks suppressed
	[  +0.102075] systemd-fstab-generator[2193]: Ignoring "noauto" option for root device
	[  +0.584166] systemd-fstab-generator[2429]: Ignoring "noauto" option for root device
	[  +0.246958] systemd-fstab-generator[2507]: Ignoring "noauto" option for root device
	[  +0.978471] systemd-fstab-generator[2871]: Ignoring "noauto" option for root device
	[  +1.346538] systemd-fstab-generator[3200]: Ignoring "noauto" option for root device
	[  +8.574981] kauditd_printk_skb: 300 callbacks suppressed
	[  +9.658180] systemd-fstab-generator[4066]: Ignoring "noauto" option for root device
	[  +5.695956] kauditd_printk_skb: 42 callbacks suppressed
	[  +1.089902] systemd-fstab-generator[4548]: Ignoring "noauto" option for root device
	
	
	==> etcd [4241184a78c32c6424fb7ffc9613f7d78750e98915c5c8977f21613abd692796] <==
	{"level":"info","ts":"2024-10-07T11:47:52.364866Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-07T11:47:52.364912Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-07T11:47:52.364923Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-07T11:47:52.369504Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.196:2380"}
	{"level":"info","ts":"2024-10-07T11:47:52.369540Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.196:2380"}
	{"level":"info","ts":"2024-10-07T11:47:52.372186Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 switched to configuration voters=(11623670073473264757)"}
	{"level":"info","ts":"2024-10-07T11:47:52.372277Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8309c60c27e527a4","local-member-id":"a14f9258d3b66c75","added-peer-id":"a14f9258d3b66c75","added-peer-peer-urls":["https://192.168.39.196:2380"]}
	{"level":"info","ts":"2024-10-07T11:47:52.372381Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8309c60c27e527a4","local-member-id":"a14f9258d3b66c75","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T11:47:52.372427Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T11:47:54.214570Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-07T11:47:54.214699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-07T11:47:54.214748Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 received MsgPreVoteResp from a14f9258d3b66c75 at term 2"}
	{"level":"info","ts":"2024-10-07T11:47:54.214782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 became candidate at term 3"}
	{"level":"info","ts":"2024-10-07T11:47:54.214807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 received MsgVoteResp from a14f9258d3b66c75 at term 3"}
	{"level":"info","ts":"2024-10-07T11:47:54.214835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 became leader at term 3"}
	{"level":"info","ts":"2024-10-07T11:47:54.214867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a14f9258d3b66c75 elected leader a14f9258d3b66c75 at term 3"}
	{"level":"info","ts":"2024-10-07T11:47:54.220683Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T11:47:54.221771Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T11:47:54.222188Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"a14f9258d3b66c75","local-member-attributes":"{Name:kubernetes-upgrade-852078 ClientURLs:[https://192.168.39.196:2379]}","request-path":"/0/members/a14f9258d3b66c75/attributes","cluster-id":"8309c60c27e527a4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-07T11:47:54.222387Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T11:47:54.222692Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-07T11:47:54.222757Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-07T11:47:54.222692Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-07T11:47:54.223638Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T11:47:54.224663Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.196:2379"}
	
	
	==> etcd [dd1d2f99b8b01c1e11c5a5df8bbb0bf44934a70d5d7d0609414a416d8a49c072] <==
	{"level":"info","ts":"2024-10-07T11:47:31.297860Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-10-07T11:47:31.321997Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"8309c60c27e527a4","local-member-id":"a14f9258d3b66c75","commit-index":411}
	{"level":"info","ts":"2024-10-07T11:47:31.328457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 switched to configuration voters=()"}
	{"level":"info","ts":"2024-10-07T11:47:31.328673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 became follower at term 2"}
	{"level":"info","ts":"2024-10-07T11:47:31.328778Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft a14f9258d3b66c75 [peers: [], term: 2, commit: 411, applied: 0, lastindex: 411, lastterm: 2]"}
	{"level":"warn","ts":"2024-10-07T11:47:31.335433Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-10-07T11:47:31.351698Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":398}
	{"level":"info","ts":"2024-10-07T11:47:31.353971Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-10-07T11:47:31.362536Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"a14f9258d3b66c75","timeout":"7s"}
	{"level":"info","ts":"2024-10-07T11:47:31.363656Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"a14f9258d3b66c75"}
	{"level":"info","ts":"2024-10-07T11:47:31.364229Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"a14f9258d3b66c75","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-10-07T11:47:31.364500Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-10-07T11:47:31.365196Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-07T11:47:31.365275Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-07T11:47:31.365735Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-07T11:47:31.370575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 switched to configuration voters=(11623670073473264757)"}
	{"level":"info","ts":"2024-10-07T11:47:31.370827Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8309c60c27e527a4","local-member-id":"a14f9258d3b66c75","added-peer-id":"a14f9258d3b66c75","added-peer-peer-urls":["https://192.168.39.196:2380"]}
	{"level":"info","ts":"2024-10-07T11:47:31.377422Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8309c60c27e527a4","local-member-id":"a14f9258d3b66c75","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T11:47:31.377456Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T11:47:31.388869Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T11:47:31.404607Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.196:2380"}
	{"level":"info","ts":"2024-10-07T11:47:31.404765Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.196:2380"}
	{"level":"info","ts":"2024-10-07T11:47:31.404247Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-07T11:47:31.411329Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-07T11:47:31.411285Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"a14f9258d3b66c75","initial-advertise-peer-urls":["https://192.168.39.196:2380"],"listen-peer-urls":["https://192.168.39.196:2380"],"advertise-client-urls":["https://192.168.39.196:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.196:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	
	
	==> kernel <==
	 11:48:00 up 1 min,  0 users,  load average: 1.84, 0.54, 0.19
	Linux kubernetes-upgrade-852078 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1f7d008ff95f44084bc43eaae46c6aadb856f2e63200a3f17d819d3e02762078] <==
	I1007 11:47:55.640210       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1007 11:47:55.643339       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1007 11:47:55.651504       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1007 11:47:55.651657       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1007 11:47:55.651695       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1007 11:47:55.651717       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1007 11:47:55.689210       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1007 11:47:55.689316       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1007 11:47:55.689384       1 policy_source.go:224] refreshing policies
	I1007 11:47:55.715142       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1007 11:47:55.715213       1 aggregator.go:171] initial CRD sync complete...
	I1007 11:47:55.715221       1 autoregister_controller.go:144] Starting autoregister controller
	I1007 11:47:55.715227       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1007 11:47:55.715232       1 cache.go:39] Caches are synced for autoregister controller
	I1007 11:47:55.740677       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1007 11:47:55.741749       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E1007 11:47:55.750477       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1007 11:47:56.554539       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1007 11:47:56.968644       1 controller.go:615] quota admission added evaluator for: endpoints
	I1007 11:47:57.557778       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1007 11:47:57.578930       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1007 11:47:57.627844       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1007 11:47:57.720026       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1007 11:47:57.729419       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1007 11:47:59.138848       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [366d5ffe7a3d9966bf690c9066de027de029803785285a216174394da317749b] <==
	I1007 11:47:31.429189       1 options.go:228] external host was not specified, using 192.168.39.196
	I1007 11:47:31.460725       1 server.go:142] Version: v1.31.1
	I1007 11:47:31.460767       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [3b7c0c701c3e2f69de3bc5d37d0c75aaca72a8977e10b6d9f759d01b7ce267a3] <==
	I1007 11:47:58.984342       1 shared_informer.go:320] Caches are synced for expand
	I1007 11:47:58.984462       1 shared_informer.go:320] Caches are synced for TTL after finished
	I1007 11:47:58.984623       1 shared_informer.go:320] Caches are synced for crt configmap
	I1007 11:47:58.985838       1 shared_informer.go:320] Caches are synced for endpoint
	I1007 11:47:58.999133       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1007 11:47:59.012789       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1007 11:47:59.035339       1 shared_informer.go:320] Caches are synced for taint
	I1007 11:47:59.035425       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1007 11:47:59.035487       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-852078"
	I1007 11:47:59.035516       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1007 11:47:59.050260       1 shared_informer.go:320] Caches are synced for daemon sets
	I1007 11:47:59.051543       1 shared_informer.go:320] Caches are synced for persistent volume
	I1007 11:47:59.086770       1 shared_informer.go:320] Caches are synced for PV protection
	I1007 11:47:59.184178       1 shared_informer.go:320] Caches are synced for attach detach
	I1007 11:47:59.184552       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1007 11:47:59.184696       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="79.347µs"
	I1007 11:47:59.194924       1 shared_informer.go:320] Caches are synced for resource quota
	I1007 11:47:59.200512       1 shared_informer.go:320] Caches are synced for deployment
	I1007 11:47:59.211923       1 shared_informer.go:320] Caches are synced for resource quota
	I1007 11:47:59.234222       1 shared_informer.go:320] Caches are synced for disruption
	I1007 11:47:59.596254       1 shared_informer.go:320] Caches are synced for garbage collector
	I1007 11:47:59.596296       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1007 11:47:59.633096       1 shared_informer.go:320] Caches are synced for garbage collector
	I1007 11:48:00.022704       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="42.684748ms"
	I1007 11:48:00.022983       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="56.109µs"
	
	
	==> kube-controller-manager [fc42540b8528bfe207e0c8c44721d89be792934c24ec1e22f889ee5298a3ac97] <==
	
	
	==> kube-proxy [09050343ce53c495ea8ea4919861d0cddb7b0520b06364e40056325c3fb7d0df] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 11:47:57.182483       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 11:47:57.203559       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.196"]
	E1007 11:47:57.203643       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 11:47:57.243367       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 11:47:57.243454       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 11:47:57.243503       1 server_linux.go:169] "Using iptables Proxier"
	I1007 11:47:57.246336       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 11:47:57.246936       1 server.go:483] "Version info" version="v1.31.1"
	I1007 11:47:57.246999       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 11:47:57.249279       1 config.go:199] "Starting service config controller"
	I1007 11:47:57.249359       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 11:47:57.249436       1 config.go:105] "Starting endpoint slice config controller"
	I1007 11:47:57.249477       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 11:47:57.249512       1 config.go:328] "Starting node config controller"
	I1007 11:47:57.249580       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 11:47:57.349939       1 shared_informer.go:320] Caches are synced for node config
	I1007 11:47:57.350225       1 shared_informer.go:320] Caches are synced for service config
	I1007 11:47:57.350310       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [3d96e98772b5c726c2d0562a416d93ff5a7960c90d8af28bc83bcc108935e334] <==
	
	
	==> kube-scheduler [815b53fecd7760ebf311850137704e22eb2eb2159ed567c642b2790717964c26] <==
	W1007 11:47:55.583555       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 11:47:55.588938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 11:47:55.583779       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1007 11:47:55.590167       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:47:55.584272       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 11:47:55.601772       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:47:55.584519       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1007 11:47:55.602316       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:47:55.584919       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 11:47:55.655311       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 11:47:55.587300       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1007 11:47:55.655340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1007 11:47:55.587518       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 11:47:55.655393       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 11:47:55.587702       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 11:47:55.655439       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:47:55.587883       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1007 11:47:55.655462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 11:47:55.588101       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 11:47:55.655498       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:47:55.588316       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 11:47:55.655516       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 11:47:55.588515       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 11:47:55.655532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1007 11:47:55.674904       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [9221c394ce59fd9c26f84b5b40c69dfe3cf9c4d24c49263902699a618527c544] <==
	I1007 11:47:31.951464       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Oct 07 11:47:51 kubernetes-upgrade-852078 kubelet[4073]: I1007 11:47:51.530110    4073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/d49ee3a33ceaa034cab7b4a921d8cba8-etcd-certs\") pod \"etcd-kubernetes-upgrade-852078\" (UID: \"d49ee3a33ceaa034cab7b4a921d8cba8\") " pod="kube-system/etcd-kubernetes-upgrade-852078"
	Oct 07 11:47:51 kubernetes-upgrade-852078 kubelet[4073]: I1007 11:47:51.530224    4073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d3da50c92dfe9988ea72577fa07c5395-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-852078\" (UID: \"d3da50c92dfe9988ea72577fa07c5395\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-852078"
	Oct 07 11:47:51 kubernetes-upgrade-852078 kubelet[4073]: I1007 11:47:51.530331    4073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6b46d8ef7e8a3c59993f966f337dc391-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-852078\" (UID: \"6b46d8ef7e8a3c59993f966f337dc391\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-852078"
	Oct 07 11:47:51 kubernetes-upgrade-852078 kubelet[4073]: I1007 11:47:51.718706    4073 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-852078"
	Oct 07 11:47:51 kubernetes-upgrade-852078 kubelet[4073]: E1007 11:47:51.719618    4073 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.196:8443: connect: connection refused" node="kubernetes-upgrade-852078"
	Oct 07 11:47:51 kubernetes-upgrade-852078 kubelet[4073]: I1007 11:47:51.783516    4073 scope.go:117] "RemoveContainer" containerID="dd1d2f99b8b01c1e11c5a5df8bbb0bf44934a70d5d7d0609414a416d8a49c072"
	Oct 07 11:47:51 kubernetes-upgrade-852078 kubelet[4073]: I1007 11:47:51.784843    4073 scope.go:117] "RemoveContainer" containerID="366d5ffe7a3d9966bf690c9066de027de029803785285a216174394da317749b"
	Oct 07 11:47:51 kubernetes-upgrade-852078 kubelet[4073]: I1007 11:47:51.787096    4073 scope.go:117] "RemoveContainer" containerID="fc42540b8528bfe207e0c8c44721d89be792934c24ec1e22f889ee5298a3ac97"
	Oct 07 11:47:51 kubernetes-upgrade-852078 kubelet[4073]: E1007 11:47:51.939197    4073 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-852078?timeout=10s\": dial tcp 192.168.39.196:8443: connect: connection refused" interval="800ms"
	Oct 07 11:47:52 kubernetes-upgrade-852078 kubelet[4073]: I1007 11:47:52.121225    4073 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-852078"
	Oct 07 11:47:55 kubernetes-upgrade-852078 kubelet[4073]: I1007 11:47:55.730538    4073 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-852078"
	Oct 07 11:47:55 kubernetes-upgrade-852078 kubelet[4073]: I1007 11:47:55.730952    4073 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-852078"
	Oct 07 11:47:55 kubernetes-upgrade-852078 kubelet[4073]: I1007 11:47:55.731129    4073 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 07 11:47:55 kubernetes-upgrade-852078 kubelet[4073]: I1007 11:47:55.732159    4073 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 07 11:47:56 kubernetes-upgrade-852078 kubelet[4073]: I1007 11:47:56.314369    4073 apiserver.go:52] "Watching apiserver"
	Oct 07 11:47:56 kubernetes-upgrade-852078 kubelet[4073]: I1007 11:47:56.323758    4073 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 07 11:47:56 kubernetes-upgrade-852078 kubelet[4073]: I1007 11:47:56.342272    4073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/195388ee-ba4b-4150-beec-af13160df9d3-xtables-lock\") pod \"kube-proxy-f86nz\" (UID: \"195388ee-ba4b-4150-beec-af13160df9d3\") " pod="kube-system/kube-proxy-f86nz"
	Oct 07 11:47:56 kubernetes-upgrade-852078 kubelet[4073]: I1007 11:47:56.342306    4073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/195388ee-ba4b-4150-beec-af13160df9d3-lib-modules\") pod \"kube-proxy-f86nz\" (UID: \"195388ee-ba4b-4150-beec-af13160df9d3\") " pod="kube-system/kube-proxy-f86nz"
	Oct 07 11:47:56 kubernetes-upgrade-852078 kubelet[4073]: I1007 11:47:56.342414    4073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/51dc59cd-8830-461f-8797-4d846cd3a5cc-tmp\") pod \"storage-provisioner\" (UID: \"51dc59cd-8830-461f-8797-4d846cd3a5cc\") " pod="kube-system/storage-provisioner"
	Oct 07 11:47:56 kubernetes-upgrade-852078 kubelet[4073]: I1007 11:47:56.621283    4073 scope.go:117] "RemoveContainer" containerID="0da54cb77064d018faef50c732ed2c87b284c4bef13d5c2b47c23b5d08ab1be8"
	Oct 07 11:47:56 kubernetes-upgrade-852078 kubelet[4073]: I1007 11:47:56.622624    4073 scope.go:117] "RemoveContainer" containerID="b3314f7eaf6a6bf8bbe66e366d0bbd1abc6096c5073a657e6209c1afddbeae2e"
	Oct 07 11:47:56 kubernetes-upgrade-852078 kubelet[4073]: I1007 11:47:56.622944    4073 scope.go:117] "RemoveContainer" containerID="3d96e98772b5c726c2d0562a416d93ff5a7960c90d8af28bc83bcc108935e334"
	Oct 07 11:47:56 kubernetes-upgrade-852078 kubelet[4073]: I1007 11:47:56.624319    4073 scope.go:117] "RemoveContainer" containerID="1c30f7056dae3f128d2042adf1c32381f46bc733669633e7542a3d035f40770b"
	Oct 07 11:47:58 kubernetes-upgrade-852078 kubelet[4073]: I1007 11:47:58.544231    4073 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 07 11:47:59 kubernetes-upgrade-852078 kubelet[4073]: I1007 11:47:59.961307    4073 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [0da54cb77064d018faef50c732ed2c87b284c4bef13d5c2b47c23b5d08ab1be8] <==
	I1007 11:47:49.611862       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1007 11:47:49.614755       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [24d16e7a1642a56a09be9ffff8aa471862b66684f10360b6a5f7361e4bcdf5ac] <==
	I1007 11:47:56.905429       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 11:47:56.952016       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 11:47:56.952131       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 11:47:56.998921       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 11:47:56.999172       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-852078_40b33938-6228-4a16-80fd-288d1be075b8!
	I1007 11:47:57.002189       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"07f7b210-205f-46e6-872d-5b2946091966", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-852078_40b33938-6228-4a16-80fd-288d1be075b8 became leader
	I1007 11:47:57.099959       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-852078_40b33938-6228-4a16-80fd-288d1be075b8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-852078 -n kubernetes-upgrade-852078
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-852078 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-852078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-852078
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-852078: (1.10574252s)
--- FAIL: TestKubernetesUpgrade (423.55s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (32.85s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-328632 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-328632 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (28.807065514s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-328632] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-328632" primary control-plane node in "pause-328632" cluster
	* Updating the running kvm2 "pause-328632" VM ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-328632" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 11:47:58.235072   56886 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:47:58.235372   56886 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:47:58.235384   56886 out.go:358] Setting ErrFile to fd 2...
	I1007 11:47:58.235390   56886 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:47:58.235630   56886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 11:47:58.236587   56886 out.go:352] Setting JSON to false
	I1007 11:47:58.237840   56886 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5372,"bootTime":1728296306,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 11:47:58.237973   56886 start.go:139] virtualization: kvm guest
	I1007 11:47:58.240549   56886 out.go:177] * [pause-328632] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 11:47:58.242167   56886 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 11:47:58.242180   56886 notify.go:220] Checking for updates...
	I1007 11:47:58.245024   56886 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:47:58.246347   56886 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 11:47:58.247570   56886 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 11:47:58.248781   56886 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 11:47:58.250027   56886 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 11:47:58.251821   56886 config.go:182] Loaded profile config "pause-328632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:47:58.252247   56886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:47:58.252303   56886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:47:58.271936   56886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40041
	I1007 11:47:58.272492   56886 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:47:58.273099   56886 main.go:141] libmachine: Using API Version  1
	I1007 11:47:58.273123   56886 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:47:58.273561   56886 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:47:58.273771   56886 main.go:141] libmachine: (pause-328632) Calling .DriverName
	I1007 11:47:58.274035   56886 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 11:47:58.274374   56886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:47:58.274412   56886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:47:58.291571   56886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35181
	I1007 11:47:58.292132   56886 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:47:58.292705   56886 main.go:141] libmachine: Using API Version  1
	I1007 11:47:58.292751   56886 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:47:58.293161   56886 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:47:58.293413   56886 main.go:141] libmachine: (pause-328632) Calling .DriverName
	I1007 11:47:58.385465   56886 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 11:47:58.410004   56886 start.go:297] selected driver: kvm2
	I1007 11:47:58.410029   56886 start.go:901] validating driver "kvm2" against &{Name:pause-328632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-328632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.219 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:47:58.410216   56886 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 11:47:58.410570   56886 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:47:58.410651   56886 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19761-3912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 11:47:58.427375   56886 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 11:47:58.428469   56886 cni.go:84] Creating CNI manager for ""
	I1007 11:47:58.428541   56886 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 11:47:58.428612   56886 start.go:340] cluster config:
	{Name:pause-328632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-328632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.219 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:47:58.428783   56886 iso.go:125] acquiring lock: {Name:mk894f62b1e70fe8556c552f818beb1e4be6fad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:47:58.430863   56886 out.go:177] * Starting "pause-328632" primary control-plane node in "pause-328632" cluster
	I1007 11:47:58.432522   56886 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:47:58.432571   56886 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 11:47:58.432585   56886 cache.go:56] Caching tarball of preloaded images
	I1007 11:47:58.432679   56886 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 11:47:58.432692   56886 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 11:47:58.432865   56886 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632/config.json ...
	I1007 11:47:58.433089   56886 start.go:360] acquireMachinesLock for pause-328632: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 11:47:58.433143   56886 start.go:364] duration metric: took 32.907µs to acquireMachinesLock for "pause-328632"
	I1007 11:47:58.433163   56886 start.go:96] Skipping create...Using existing machine configuration
	I1007 11:47:58.433171   56886 fix.go:54] fixHost starting: 
	I1007 11:47:58.433478   56886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:47:58.433516   56886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:47:58.451125   56886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35895
	I1007 11:47:58.451707   56886 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:47:58.452298   56886 main.go:141] libmachine: Using API Version  1
	I1007 11:47:58.452327   56886 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:47:58.452654   56886 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:47:58.452862   56886 main.go:141] libmachine: (pause-328632) Calling .DriverName
	I1007 11:47:58.453041   56886 main.go:141] libmachine: (pause-328632) Calling .GetState
	I1007 11:47:58.454743   56886 fix.go:112] recreateIfNeeded on pause-328632: state=Running err=<nil>
	W1007 11:47:58.454779   56886 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 11:47:58.457879   56886 out.go:177] * Updating the running kvm2 "pause-328632" VM ...
	I1007 11:47:58.459130   56886 machine.go:93] provisionDockerMachine start ...
	I1007 11:47:58.459159   56886 main.go:141] libmachine: (pause-328632) Calling .DriverName
	I1007 11:47:58.459385   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:47:58.462380   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.462815   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:47:58.462841   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.463002   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:47:58.463155   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:58.463317   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:58.463462   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:47:58.463633   56886 main.go:141] libmachine: Using SSH client type: native
	I1007 11:47:58.463913   56886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I1007 11:47:58.463928   56886 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 11:47:58.585245   56886 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-328632
	
	I1007 11:47:58.585276   56886 main.go:141] libmachine: (pause-328632) Calling .GetMachineName
	I1007 11:47:58.585498   56886 buildroot.go:166] provisioning hostname "pause-328632"
	I1007 11:47:58.585535   56886 main.go:141] libmachine: (pause-328632) Calling .GetMachineName
	I1007 11:47:58.585749   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:47:58.588898   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.589360   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:47:58.589411   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.589692   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:47:58.589881   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:58.590021   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:58.590133   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:47:58.590304   56886 main.go:141] libmachine: Using SSH client type: native
	I1007 11:47:58.590512   56886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I1007 11:47:58.590529   56886 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-328632 && echo "pause-328632" | sudo tee /etc/hostname
	I1007 11:47:58.730940   56886 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-328632
	
	I1007 11:47:58.730972   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:47:58.733998   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.734363   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:47:58.734392   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.734586   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:47:58.734799   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:58.734960   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:58.735110   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:47:58.735291   56886 main.go:141] libmachine: Using SSH client type: native
	I1007 11:47:58.735471   56886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I1007 11:47:58.735492   56886 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-328632' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-328632/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-328632' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 11:47:58.855403   56886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 11:47:58.855436   56886 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 11:47:58.855457   56886 buildroot.go:174] setting up certificates
	I1007 11:47:58.855470   56886 provision.go:84] configureAuth start
	I1007 11:47:58.855482   56886 main.go:141] libmachine: (pause-328632) Calling .GetMachineName
	I1007 11:47:58.855768   56886 main.go:141] libmachine: (pause-328632) Calling .GetIP
	I1007 11:47:58.859054   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.859582   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:47:58.859618   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.859791   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:47:58.862799   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.863242   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:47:58.863267   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.863425   56886 provision.go:143] copyHostCerts
	I1007 11:47:58.863479   56886 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 11:47:58.863499   56886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 11:47:58.863576   56886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 11:47:58.863682   56886 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 11:47:58.863690   56886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 11:47:58.863719   56886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 11:47:58.863798   56886 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 11:47:58.863807   56886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 11:47:58.863836   56886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 11:47:58.863913   56886 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.pause-328632 san=[127.0.0.1 192.168.72.219 localhost minikube pause-328632]
	I1007 11:47:59.190511   56886 provision.go:177] copyRemoteCerts
	I1007 11:47:59.190569   56886 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 11:47:59.190591   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:47:59.193770   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:59.194175   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:47:59.194233   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:59.194420   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:47:59.194618   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:59.194781   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:47:59.194934   56886 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/pause-328632/id_rsa Username:docker}
	I1007 11:47:59.284938   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 11:47:59.317674   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 11:47:59.349795   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 11:47:59.378706   56886 provision.go:87] duration metric: took 523.223734ms to configureAuth
	I1007 11:47:59.378736   56886 buildroot.go:189] setting minikube options for container-runtime
	I1007 11:47:59.378948   56886 config.go:182] Loaded profile config "pause-328632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:47:59.379014   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:47:59.381971   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:59.382309   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:47:59.382338   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:59.382533   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:47:59.382694   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:59.382871   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:59.383006   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:47:59.383162   56886 main.go:141] libmachine: Using SSH client type: native
	I1007 11:47:59.383309   56886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I1007 11:47:59.383324   56886 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 11:48:04.915170   56886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 11:48:04.915204   56886 machine.go:96] duration metric: took 6.456056821s to provisionDockerMachine
	I1007 11:48:04.915218   56886 start.go:293] postStartSetup for "pause-328632" (driver="kvm2")
	I1007 11:48:04.915231   56886 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 11:48:04.915255   56886 main.go:141] libmachine: (pause-328632) Calling .DriverName
	I1007 11:48:04.915580   56886 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 11:48:04.915614   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:48:04.918516   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:04.918982   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:48:04.919013   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:04.919154   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:48:04.919364   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:48:04.919506   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:48:04.919647   56886 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/pause-328632/id_rsa Username:docker}
	I1007 11:48:05.007137   56886 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 11:48:05.011837   56886 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 11:48:05.011861   56886 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 11:48:05.011931   56886 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 11:48:05.012053   56886 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 11:48:05.012141   56886 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 11:48:05.021937   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 11:48:05.048629   56886 start.go:296] duration metric: took 133.395142ms for postStartSetup
	I1007 11:48:05.048673   56886 fix.go:56] duration metric: took 6.615502714s for fixHost
	I1007 11:48:05.048697   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:48:05.051614   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:05.051954   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:48:05.052005   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:05.052164   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:48:05.052337   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:48:05.052472   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:48:05.052624   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:48:05.052813   56886 main.go:141] libmachine: Using SSH client type: native
	I1007 11:48:05.052984   56886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I1007 11:48:05.053019   56886 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 11:48:05.165216   56886 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728301685.154599192
	
	I1007 11:48:05.165238   56886 fix.go:216] guest clock: 1728301685.154599192
	I1007 11:48:05.165246   56886 fix.go:229] Guest: 2024-10-07 11:48:05.154599192 +0000 UTC Remote: 2024-10-07 11:48:05.048678627 +0000 UTC m=+6.863515987 (delta=105.920565ms)
	I1007 11:48:05.165289   56886 fix.go:200] guest clock delta is within tolerance: 105.920565ms
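For reference, the clock check above works by running `date +%s.%N` on the guest, parsing the result as fractional seconds since the epoch, and comparing it with the host clock; a resync only happens when the delta exceeds minikube's tolerance. A minimal stand-alone sketch of that arithmetic, using the value from this log (the one-second threshold here is illustrative, not minikube's actual constant):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` captured from the guest (value from the log above).
	guestOut := "1728301685.154599192"
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	fmt.Printf("guest clock delta: %v\n", delta)
	// Illustrative threshold only; minikube's real tolerance lives in fix.go.
	if math.Abs(delta.Seconds()) > 1.0 {
		fmt.Println("delta too large: would sync the guest clock")
	}
}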
	I1007 11:48:05.165293   56886 start.go:83] releasing machines lock for "pause-328632", held for 6.732139355s
	I1007 11:48:05.165319   56886 main.go:141] libmachine: (pause-328632) Calling .DriverName
	I1007 11:48:05.165595   56886 main.go:141] libmachine: (pause-328632) Calling .GetIP
	I1007 11:48:05.169272   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:05.169647   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:48:05.169671   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:05.169882   56886 main.go:141] libmachine: (pause-328632) Calling .DriverName
	I1007 11:48:05.170455   56886 main.go:141] libmachine: (pause-328632) Calling .DriverName
	I1007 11:48:05.170627   56886 main.go:141] libmachine: (pause-328632) Calling .DriverName
	I1007 11:48:05.170702   56886 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 11:48:05.170753   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:48:05.170859   56886 ssh_runner.go:195] Run: cat /version.json
	I1007 11:48:05.170886   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:48:05.173964   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:05.174057   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:05.174320   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:48:05.174343   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:05.174487   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:48:05.174587   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:48:05.174617   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:05.174685   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:48:05.174774   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:48:05.174845   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:48:05.174908   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:48:05.174966   56886 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/pause-328632/id_rsa Username:docker}
	I1007 11:48:05.174998   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:48:05.175091   56886 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/pause-328632/id_rsa Username:docker}
	I1007 11:48:05.283569   56886 ssh_runner.go:195] Run: systemctl --version
	I1007 11:48:05.290655   56886 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 11:48:05.456517   56886 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 11:48:05.464780   56886 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 11:48:05.464851   56886 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 11:48:05.476708   56886 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1007 11:48:05.476736   56886 start.go:495] detecting cgroup driver to use...
	I1007 11:48:05.476820   56886 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 11:48:05.494023   56886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 11:48:05.510117   56886 docker.go:217] disabling cri-docker service (if available) ...
	I1007 11:48:05.510197   56886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 11:48:05.525350   56886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 11:48:05.540104   56886 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 11:48:05.688555   56886 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 11:48:05.827204   56886 docker.go:233] disabling docker service ...
	I1007 11:48:05.827279   56886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 11:48:05.847161   56886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 11:48:05.863040   56886 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 11:48:05.994082   56886 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 11:48:06.124692   56886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 11:48:06.141638   56886 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 11:48:06.164908   56886 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 11:48:06.164978   56886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:48:06.176269   56886 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 11:48:06.176338   56886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:48:06.186959   56886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:48:06.198757   56886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:48:06.210499   56886 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 11:48:06.222580   56886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:48:06.233927   56886 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:48:06.246494   56886 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:48:06.259303   56886 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 11:48:06.270953   56886 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 11:48:06.282702   56886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:48:06.416347   56886 ssh_runner.go:195] Run: sudo systemctl restart crio
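Taken together, the sed edits above should leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (the file itself is not printed in the log, and the section headers shown are the standard CRI-O ones, assumed here):

[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]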
	I1007 11:48:06.642267   56886 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 11:48:06.642331   56886 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 11:48:06.648296   56886 start.go:563] Will wait 60s for crictl version
	I1007 11:48:06.648352   56886 ssh_runner.go:195] Run: which crictl
	I1007 11:48:06.652658   56886 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 11:48:06.703409   56886 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 11:48:06.703487   56886 ssh_runner.go:195] Run: crio --version
	I1007 11:48:06.738106   56886 ssh_runner.go:195] Run: crio --version
	I1007 11:48:06.774650   56886 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 11:48:06.775876   56886 main.go:141] libmachine: (pause-328632) Calling .GetIP
	I1007 11:48:06.779335   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:06.779816   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:48:06.779837   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:06.780141   56886 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1007 11:48:06.785713   56886 kubeadm.go:883] updating cluster {Name:pause-328632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-328632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.219 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 11:48:06.785882   56886 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:48:06.785949   56886 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:48:06.845738   56886 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 11:48:06.845760   56886 crio.go:433] Images already preloaded, skipping extraction
	I1007 11:48:06.845812   56886 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:48:06.889201   56886 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 11:48:06.889224   56886 cache_images.go:84] Images are preloaded, skipping loading
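The preload check above simply asks crictl for its image list in JSON and verifies that the expected Kubernetes images are already present, so no tarball extraction or image pull is needed. A rough equivalent, assuming crictl's usual {"images":[{"repoTags":[...]}]} output shape:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// imageList mirrors the assumed shape of `crictl images --output json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		log.Fatal(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// One of the images the v1.31.1/cri-o preload is expected to contain.
	fmt.Println("pause image present:", have["registry.k8s.io/pause:3.10"])
}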
	I1007 11:48:06.889233   56886 kubeadm.go:934] updating node { 192.168.72.219 8443 v1.31.1 crio true true} ...
	I1007 11:48:06.889338   56886 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-328632 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.219
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-328632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 11:48:06.889427   56886 ssh_runner.go:195] Run: crio config
	I1007 11:48:06.951385   56886 cni.go:84] Creating CNI manager for ""
	I1007 11:48:06.951406   56886 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 11:48:06.951418   56886 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 11:48:06.951445   56886 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.219 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-328632 NodeName:pause-328632 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.219"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.219 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 11:48:06.951618   56886 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.219
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-328632"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.219
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.219"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 11:48:06.951680   56886 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 11:48:06.963463   56886 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 11:48:06.963536   56886 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 11:48:06.975461   56886 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1007 11:48:06.993965   56886 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 11:48:07.012739   56886 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I1007 11:48:07.031526   56886 ssh_runner.go:195] Run: grep 192.168.72.219	control-plane.minikube.internal$ /etc/hosts
	I1007 11:48:07.035758   56886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:48:07.202226   56886 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 11:48:07.218569   56886 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632 for IP: 192.168.72.219
	I1007 11:48:07.218591   56886 certs.go:194] generating shared ca certs ...
	I1007 11:48:07.218605   56886 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:48:07.218777   56886 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 11:48:07.218834   56886 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 11:48:07.218848   56886 certs.go:256] generating profile certs ...
	I1007 11:48:07.218958   56886 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632/client.key
	I1007 11:48:07.219027   56886 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632/apiserver.key.dd135421
	I1007 11:48:07.219089   56886 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632/proxy-client.key
	I1007 11:48:07.219224   56886 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 11:48:07.219258   56886 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 11:48:07.219271   56886 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 11:48:07.219308   56886 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 11:48:07.219336   56886 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 11:48:07.219367   56886 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 11:48:07.219423   56886 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 11:48:07.220081   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 11:48:07.249790   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 11:48:07.278078   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 11:48:07.313071   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 11:48:07.342821   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1007 11:48:07.369334   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 11:48:07.396167   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 11:48:07.427370   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 11:48:07.457219   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 11:48:07.488520   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 11:48:07.516753   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 11:48:07.543254   56886 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 11:48:07.563436   56886 ssh_runner.go:195] Run: openssl version
	I1007 11:48:07.570335   56886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 11:48:07.582600   56886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 11:48:07.587333   56886 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 11:48:07.587396   56886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 11:48:07.593384   56886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 11:48:07.603977   56886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 11:48:07.616357   56886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:48:07.621194   56886 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:48:07.621270   56886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:48:07.627462   56886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 11:48:07.640370   56886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 11:48:07.653613   56886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 11:48:07.658792   56886 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 11:48:07.658865   56886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 11:48:07.665411   56886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
	I1007 11:48:07.678525   56886 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 11:48:07.683638   56886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 11:48:07.690137   56886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 11:48:07.696834   56886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 11:48:07.703414   56886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 11:48:07.710055   56886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 11:48:07.716619   56886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
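Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; any cert that fails the check gets regenerated. An equivalent check in Go, using one of the cert paths from this log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	certPath := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
	pemBytes, err := os.ReadFile(certPath)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in " + certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// -checkend 86400: fail if the cert expires within the next 24 hours.
	cutoff := time.Now().Add(86400 * time.Second)
	if cert.NotAfter.Before(cutoff) {
		fmt.Println("certificate expires within 24h; it would be regenerated")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}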
	I1007 11:48:07.722594   56886 kubeadm.go:392] StartCluster: {Name:pause-328632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-328632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.219 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:48:07.722710   56886 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 11:48:07.722764   56886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 11:48:07.923553   56886 cri.go:89] found id: "949e59046784c43201c4d9b447e8274967d2d64d08262b082084f7f81283274f"
	I1007 11:48:07.923583   56886 cri.go:89] found id: "34826c9a89b28965c296398aeec406d6c08dabe388de546f079ad262827a51b6"
	I1007 11:48:07.923588   56886 cri.go:89] found id: "97cadc0102ef22d68c6dfac6e8da330cb407ecdf72db6b70abf8d78d8b8d744c"
	I1007 11:48:07.923592   56886 cri.go:89] found id: "ed3fc62150c6cc0a757f8662ff2ad489307b473010c76c7e4cf39b46fcdc0ab6"
	I1007 11:48:07.923597   56886 cri.go:89] found id: "8738c004ac7649299b52691648c8ddd5b8c96190044c2155d113061ba85992a1"
	I1007 11:48:07.923601   56886 cri.go:89] found id: "013a9b10ece36ffe03232b8539a92f6679b62aa4c5570ff41898ae0975a779d7"
	I1007 11:48:07.923605   56886 cri.go:89] found id: ""
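The IDs above come from asking crictl for every container (-a, --quiet) whose pod carries the label io.kubernetes.pod.namespace=kube-system; these are the containers the pause logic then operates on. A stand-alone sketch of that listing (illustrative, not minikube's cri package):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// -a: include non-running containers; --quiet: print IDs only;
	// --label: restrict to pods in the kube-system namespace.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		fmt.Println(" -", id)
	}
}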
	I1007 11:48:07.923657   56886 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-328632 -n pause-328632
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-328632 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-328632 logs -n 25: (1.379789804s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-167819 sudo                 | cilium-167819             | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC |                     |
	|         | systemctl status crio --all           |                           |         |         |                     |                     |
	|         | --full --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-167819 sudo                 | cilium-167819             | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC |                     |
	|         | systemctl cat crio --no-pager         |                           |         |         |                     |                     |
	| ssh     | -p cilium-167819 sudo find            | cilium-167819             | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC |                     |
	|         | /etc/crio -type f -exec sh -c         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-167819 sudo crio            | cilium-167819             | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-167819                      | cilium-167819             | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC | 07 Oct 24 11:44 UTC |
	| start   | -p cert-expiration-658191             | cert-expiration-658191    | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC | 07 Oct 24 11:45 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-056919             | running-upgrade-056919    | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC | 07 Oct 24 11:44 UTC |
	| start   | -p force-systemd-flag-468078          | force-systemd-flag-468078 | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC | 07 Oct 24 11:46 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-264062           | force-systemd-env-264062  | jenkins | v1.34.0 | 07 Oct 24 11:45 UTC | 07 Oct 24 11:45 UTC |
	| start   | -p cert-options-495675                | cert-options-495675       | jenkins | v1.34.0 | 07 Oct 24 11:45 UTC | 07 Oct 24 11:46 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-852078          | kubernetes-upgrade-852078 | jenkins | v1.34.0 | 07 Oct 24 11:45 UTC | 07 Oct 24 11:45 UTC |
	| start   | -p kubernetes-upgrade-852078          | kubernetes-upgrade-852078 | jenkins | v1.34.0 | 07 Oct 24 11:45 UTC | 07 Oct 24 11:46 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-468078 ssh cat     | force-systemd-flag-468078 | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC | 07 Oct 24 11:46 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-468078          | force-systemd-flag-468078 | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC | 07 Oct 24 11:46 UTC |
	| start   | -p pause-328632 --memory=2048         | pause-328632              | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC | 07 Oct 24 11:47 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-495675 ssh               | cert-options-495675       | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC | 07 Oct 24 11:46 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-495675 -- sudo        | cert-options-495675       | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC | 07 Oct 24 11:46 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-495675                | cert-options-495675       | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC | 07 Oct 24 11:46 UTC |
	| start   | -p auto-167819 --memory=3072          | auto-167819               | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC | 07 Oct 24 11:48 UTC |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-852078          | kubernetes-upgrade-852078 | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-852078          | kubernetes-upgrade-852078 | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC | 07 Oct 24 11:47 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-328632                       | pause-328632              | jenkins | v1.34.0 | 07 Oct 24 11:47 UTC | 07 Oct 24 11:48 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-852078          | kubernetes-upgrade-852078 | jenkins | v1.34.0 | 07 Oct 24 11:48 UTC | 07 Oct 24 11:48 UTC |
	| start   | -p kindnet-167819                     | kindnet-167819            | jenkins | v1.34.0 | 07 Oct 24 11:48 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p auto-167819 pgrep -a               | auto-167819               | jenkins | v1.34.0 | 07 Oct 24 11:48 UTC | 07 Oct 24 11:48 UTC |
	|         | kubelet                               |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 11:48:02
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 11:48:02.471921   57110 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:48:02.472087   57110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:48:02.472099   57110 out.go:358] Setting ErrFile to fd 2...
	I1007 11:48:02.472105   57110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:48:02.472299   57110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 11:48:02.472916   57110 out.go:352] Setting JSON to false
	I1007 11:48:02.473828   57110 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5376,"bootTime":1728296306,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 11:48:02.473937   57110 start.go:139] virtualization: kvm guest
	I1007 11:48:02.476582   57110 out.go:177] * [kindnet-167819] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 11:48:02.477962   57110 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 11:48:02.477962   57110 notify.go:220] Checking for updates...
	I1007 11:48:02.479398   57110 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:48:02.480962   57110 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 11:48:02.482873   57110 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 11:48:02.484197   57110 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 11:48:02.485735   57110 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 11:48:02.487627   57110 config.go:182] Loaded profile config "auto-167819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:48:02.487764   57110 config.go:182] Loaded profile config "cert-expiration-658191": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:48:02.487888   57110 config.go:182] Loaded profile config "pause-328632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:48:02.487964   57110 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 11:48:02.528021   57110 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 11:48:02.529198   57110 start.go:297] selected driver: kvm2
	I1007 11:48:02.529213   57110 start.go:901] validating driver "kvm2" against <nil>
	I1007 11:48:02.529227   57110 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 11:48:02.530053   57110 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:48:02.530148   57110 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19761-3912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 11:48:02.545967   57110 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 11:48:02.546009   57110 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 11:48:02.546250   57110 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 11:48:02.546284   57110 cni.go:84] Creating CNI manager for "kindnet"
	I1007 11:48:02.546293   57110 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 11:48:02.546336   57110 start.go:340] cluster config:
	{Name:kindnet-167819 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-167819 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:48:02.546429   57110 iso.go:125] acquiring lock: {Name:mk894f62b1e70fe8556c552f818beb1e4be6fad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:48:02.548264   57110 out.go:177] * Starting "kindnet-167819" primary control-plane node in "kindnet-167819" cluster
	I1007 11:47:58.459130   56886 machine.go:93] provisionDockerMachine start ...
	I1007 11:47:58.459159   56886 main.go:141] libmachine: (pause-328632) Calling .DriverName
	I1007 11:47:58.459385   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:47:58.462380   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.462815   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:47:58.462841   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.463002   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:47:58.463155   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:58.463317   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:58.463462   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:47:58.463633   56886 main.go:141] libmachine: Using SSH client type: native
	I1007 11:47:58.463913   56886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I1007 11:47:58.463928   56886 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 11:47:58.585245   56886 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-328632
	
	I1007 11:47:58.585276   56886 main.go:141] libmachine: (pause-328632) Calling .GetMachineName
	I1007 11:47:58.585498   56886 buildroot.go:166] provisioning hostname "pause-328632"
	I1007 11:47:58.585535   56886 main.go:141] libmachine: (pause-328632) Calling .GetMachineName
	I1007 11:47:58.585749   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:47:58.588898   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.589360   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:47:58.589411   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.589692   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:47:58.589881   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:58.590021   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:58.590133   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:47:58.590304   56886 main.go:141] libmachine: Using SSH client type: native
	I1007 11:47:58.590512   56886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I1007 11:47:58.590529   56886 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-328632 && echo "pause-328632" | sudo tee /etc/hostname
	I1007 11:47:58.730940   56886 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-328632
	
	I1007 11:47:58.730972   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:47:58.733998   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.734363   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:47:58.734392   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.734586   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:47:58.734799   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:58.734960   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:58.735110   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:47:58.735291   56886 main.go:141] libmachine: Using SSH client type: native
	I1007 11:47:58.735471   56886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I1007 11:47:58.735492   56886 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-328632' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-328632/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-328632' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 11:47:58.855403   56886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 11:47:58.855436   56886 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 11:47:58.855457   56886 buildroot.go:174] setting up certificates
	I1007 11:47:58.855470   56886 provision.go:84] configureAuth start
	I1007 11:47:58.855482   56886 main.go:141] libmachine: (pause-328632) Calling .GetMachineName
	I1007 11:47:58.855768   56886 main.go:141] libmachine: (pause-328632) Calling .GetIP
	I1007 11:47:58.859054   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.859582   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:47:58.859618   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.859791   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:47:58.862799   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.863242   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:47:58.863267   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.863425   56886 provision.go:143] copyHostCerts
	I1007 11:47:58.863479   56886 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 11:47:58.863499   56886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 11:47:58.863576   56886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 11:47:58.863682   56886 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 11:47:58.863690   56886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 11:47:58.863719   56886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 11:47:58.863798   56886 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 11:47:58.863807   56886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 11:47:58.863836   56886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 11:47:58.863913   56886 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.pause-328632 san=[127.0.0.1 192.168.72.219 localhost minikube pause-328632]
	I1007 11:47:59.190511   56886 provision.go:177] copyRemoteCerts
	I1007 11:47:59.190569   56886 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 11:47:59.190591   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:47:59.193770   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:59.194175   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:47:59.194233   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:59.194420   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:47:59.194618   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:59.194781   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:47:59.194934   56886 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/pause-328632/id_rsa Username:docker}
	I1007 11:47:59.284938   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 11:47:59.317674   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 11:47:59.349795   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 11:47:59.378706   56886 provision.go:87] duration metric: took 523.223734ms to configureAuth
	I1007 11:47:59.378736   56886 buildroot.go:189] setting minikube options for container-runtime
	I1007 11:47:59.378948   56886 config.go:182] Loaded profile config "pause-328632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:47:59.379014   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:47:59.381971   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:59.382309   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:47:59.382338   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:59.382533   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:47:59.382694   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:59.382871   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:59.383006   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:47:59.383162   56886 main.go:141] libmachine: Using SSH client type: native
	I1007 11:47:59.383309   56886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I1007 11:47:59.383324   56886 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 11:47:59.825659   56038 pod_ready.go:103] pod "coredns-7c65d6cfc9-4wtfv" in "kube-system" namespace has status "Ready":"False"
	I1007 11:48:02.325057   56038 pod_ready.go:103] pod "coredns-7c65d6cfc9-4wtfv" in "kube-system" namespace has status "Ready":"False"
	I1007 11:48:02.549476   57110 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:48:02.549511   57110 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 11:48:02.549518   57110 cache.go:56] Caching tarball of preloaded images
	I1007 11:48:02.549595   57110 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 11:48:02.549607   57110 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 11:48:02.549715   57110 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kindnet-167819/config.json ...
	I1007 11:48:02.549734   57110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kindnet-167819/config.json: {Name:mk3772bf1d8e7c3dfddf1e4e448acf7d973f76ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:48:02.549898   57110 start.go:360] acquireMachinesLock for kindnet-167819: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 11:48:05.165391   57110 start.go:364] duration metric: took 2.615457037s to acquireMachinesLock for "kindnet-167819"
	I1007 11:48:05.165451   57110 start.go:93] Provisioning new machine with config: &{Name:kindnet-167819 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-167819 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 11:48:05.165595   57110 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 11:48:04.915170   56886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 11:48:04.915204   56886 machine.go:96] duration metric: took 6.456056821s to provisionDockerMachine
	I1007 11:48:04.915218   56886 start.go:293] postStartSetup for "pause-328632" (driver="kvm2")
	I1007 11:48:04.915231   56886 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 11:48:04.915255   56886 main.go:141] libmachine: (pause-328632) Calling .DriverName
	I1007 11:48:04.915580   56886 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 11:48:04.915614   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:48:04.918516   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:04.918982   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:48:04.919013   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:04.919154   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:48:04.919364   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:48:04.919506   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:48:04.919647   56886 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/pause-328632/id_rsa Username:docker}
	I1007 11:48:05.007137   56886 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 11:48:05.011837   56886 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 11:48:05.011861   56886 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 11:48:05.011931   56886 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 11:48:05.012053   56886 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 11:48:05.012141   56886 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 11:48:05.021937   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 11:48:05.048629   56886 start.go:296] duration metric: took 133.395142ms for postStartSetup
	I1007 11:48:05.048673   56886 fix.go:56] duration metric: took 6.615502714s for fixHost
	I1007 11:48:05.048697   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:48:05.051614   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:05.051954   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:48:05.052005   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:05.052164   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:48:05.052337   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:48:05.052472   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:48:05.052624   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:48:05.052813   56886 main.go:141] libmachine: Using SSH client type: native
	I1007 11:48:05.052984   56886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I1007 11:48:05.053019   56886 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 11:48:05.165216   56886 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728301685.154599192
	
	I1007 11:48:05.165238   56886 fix.go:216] guest clock: 1728301685.154599192
	I1007 11:48:05.165246   56886 fix.go:229] Guest: 2024-10-07 11:48:05.154599192 +0000 UTC Remote: 2024-10-07 11:48:05.048678627 +0000 UTC m=+6.863515987 (delta=105.920565ms)
	I1007 11:48:05.165289   56886 fix.go:200] guest clock delta is within tolerance: 105.920565ms
	I1007 11:48:05.165293   56886 start.go:83] releasing machines lock for "pause-328632", held for 6.732139355s
	I1007 11:48:05.165319   56886 main.go:141] libmachine: (pause-328632) Calling .DriverName
	I1007 11:48:05.165595   56886 main.go:141] libmachine: (pause-328632) Calling .GetIP
	I1007 11:48:05.169272   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:05.169647   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:48:05.169671   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:05.169882   56886 main.go:141] libmachine: (pause-328632) Calling .DriverName
	I1007 11:48:05.170455   56886 main.go:141] libmachine: (pause-328632) Calling .DriverName
	I1007 11:48:05.170627   56886 main.go:141] libmachine: (pause-328632) Calling .DriverName
	I1007 11:48:05.170702   56886 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 11:48:05.170753   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:48:05.170859   56886 ssh_runner.go:195] Run: cat /version.json
	I1007 11:48:05.170886   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:48:05.173964   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:05.174057   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:05.174320   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:48:05.174343   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:05.174487   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:48:05.174587   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:48:05.174617   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:05.174685   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:48:05.174774   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:48:05.174845   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:48:05.174908   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:48:05.174966   56886 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/pause-328632/id_rsa Username:docker}
	I1007 11:48:05.174998   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:48:05.175091   56886 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/pause-328632/id_rsa Username:docker}
	I1007 11:48:05.283569   56886 ssh_runner.go:195] Run: systemctl --version
	I1007 11:48:05.290655   56886 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 11:48:05.456517   56886 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 11:48:05.464780   56886 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 11:48:05.464851   56886 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 11:48:05.476708   56886 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1007 11:48:05.476736   56886 start.go:495] detecting cgroup driver to use...
	I1007 11:48:05.476820   56886 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 11:48:05.494023   56886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 11:48:05.510117   56886 docker.go:217] disabling cri-docker service (if available) ...
	I1007 11:48:05.510197   56886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 11:48:05.525350   56886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 11:48:05.540104   56886 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 11:48:05.688555   56886 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 11:48:05.827204   56886 docker.go:233] disabling docker service ...
	I1007 11:48:05.827279   56886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 11:48:05.847161   56886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 11:48:05.863040   56886 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 11:48:05.994082   56886 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 11:48:06.124692   56886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 11:48:06.141638   56886 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 11:48:06.164908   56886 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 11:48:06.164978   56886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:48:06.176269   56886 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 11:48:06.176338   56886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:48:06.186959   56886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:48:06.198757   56886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:48:06.210499   56886 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 11:48:06.222580   56886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:48:06.233927   56886 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:48:06.246494   56886 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:48:06.259303   56886 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 11:48:06.270953   56886 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 11:48:06.282702   56886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:48:06.416347   56886 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 11:48:06.642267   56886 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 11:48:06.642331   56886 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 11:48:06.648296   56886 start.go:563] Will wait 60s for crictl version
	I1007 11:48:06.648352   56886 ssh_runner.go:195] Run: which crictl
	I1007 11:48:06.652658   56886 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 11:48:06.703409   56886 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 11:48:06.703487   56886 ssh_runner.go:195] Run: crio --version
	I1007 11:48:06.738106   56886 ssh_runner.go:195] Run: crio --version
	I1007 11:48:06.774650   56886 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 11:48:05.167712   57110 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 11:48:05.167921   57110 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:48:05.168004   57110 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:48:05.188955   57110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33917
	I1007 11:48:05.189466   57110 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:48:05.190016   57110 main.go:141] libmachine: Using API Version  1
	I1007 11:48:05.190053   57110 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:48:05.190442   57110 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:48:05.190655   57110 main.go:141] libmachine: (kindnet-167819) Calling .GetMachineName
	I1007 11:48:05.190810   57110 main.go:141] libmachine: (kindnet-167819) Calling .DriverName
	I1007 11:48:05.191033   57110 start.go:159] libmachine.API.Create for "kindnet-167819" (driver="kvm2")
	I1007 11:48:05.191071   57110 client.go:168] LocalClient.Create starting
	I1007 11:48:05.191117   57110 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem
	I1007 11:48:05.191166   57110 main.go:141] libmachine: Decoding PEM data...
	I1007 11:48:05.191190   57110 main.go:141] libmachine: Parsing certificate...
	I1007 11:48:05.191261   57110 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem
	I1007 11:48:05.191290   57110 main.go:141] libmachine: Decoding PEM data...
	I1007 11:48:05.191311   57110 main.go:141] libmachine: Parsing certificate...
	I1007 11:48:05.191336   57110 main.go:141] libmachine: Running pre-create checks...
	I1007 11:48:05.191349   57110 main.go:141] libmachine: (kindnet-167819) Calling .PreCreateCheck
	I1007 11:48:05.191733   57110 main.go:141] libmachine: (kindnet-167819) Calling .GetConfigRaw
	I1007 11:48:05.192189   57110 main.go:141] libmachine: Creating machine...
	I1007 11:48:05.192205   57110 main.go:141] libmachine: (kindnet-167819) Calling .Create
	I1007 11:48:05.192326   57110 main.go:141] libmachine: (kindnet-167819) Creating KVM machine...
	I1007 11:48:05.193774   57110 main.go:141] libmachine: (kindnet-167819) DBG | found existing default KVM network
	I1007 11:48:05.195597   57110 main.go:141] libmachine: (kindnet-167819) DBG | I1007 11:48:05.195422   57149 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000300090}
	I1007 11:48:05.195667   57110 main.go:141] libmachine: (kindnet-167819) DBG | created network xml: 
	I1007 11:48:05.195688   57110 main.go:141] libmachine: (kindnet-167819) DBG | <network>
	I1007 11:48:05.195699   57110 main.go:141] libmachine: (kindnet-167819) DBG |   <name>mk-kindnet-167819</name>
	I1007 11:48:05.195710   57110 main.go:141] libmachine: (kindnet-167819) DBG |   <dns enable='no'/>
	I1007 11:48:05.195721   57110 main.go:141] libmachine: (kindnet-167819) DBG |   
	I1007 11:48:05.195734   57110 main.go:141] libmachine: (kindnet-167819) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1007 11:48:05.195746   57110 main.go:141] libmachine: (kindnet-167819) DBG |     <dhcp>
	I1007 11:48:05.195755   57110 main.go:141] libmachine: (kindnet-167819) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1007 11:48:05.195766   57110 main.go:141] libmachine: (kindnet-167819) DBG |     </dhcp>
	I1007 11:48:05.195773   57110 main.go:141] libmachine: (kindnet-167819) DBG |   </ip>
	I1007 11:48:05.195783   57110 main.go:141] libmachine: (kindnet-167819) DBG |   
	I1007 11:48:05.195794   57110 main.go:141] libmachine: (kindnet-167819) DBG | </network>
	I1007 11:48:05.195825   57110 main.go:141] libmachine: (kindnet-167819) DBG | 
	I1007 11:48:05.201794   57110 main.go:141] libmachine: (kindnet-167819) DBG | trying to create private KVM network mk-kindnet-167819 192.168.39.0/24...
	I1007 11:48:05.277661   57110 main.go:141] libmachine: (kindnet-167819) DBG | private KVM network mk-kindnet-167819 192.168.39.0/24 created
	I1007 11:48:05.277694   57110 main.go:141] libmachine: (kindnet-167819) DBG | I1007 11:48:05.277622   57149 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 11:48:05.277749   57110 main.go:141] libmachine: (kindnet-167819) Setting up store path in /home/jenkins/minikube-integration/19761-3912/.minikube/machines/kindnet-167819 ...
	I1007 11:48:05.277791   57110 main.go:141] libmachine: (kindnet-167819) Building disk image from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 11:48:05.277831   57110 main.go:141] libmachine: (kindnet-167819) Downloading /home/jenkins/minikube-integration/19761-3912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 11:48:05.545703   57110 main.go:141] libmachine: (kindnet-167819) DBG | I1007 11:48:05.545611   57149 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/kindnet-167819/id_rsa...
	I1007 11:48:05.632204   57110 main.go:141] libmachine: (kindnet-167819) DBG | I1007 11:48:05.632065   57149 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/kindnet-167819/kindnet-167819.rawdisk...
	I1007 11:48:05.632248   57110 main.go:141] libmachine: (kindnet-167819) DBG | Writing magic tar header
	I1007 11:48:05.632267   57110 main.go:141] libmachine: (kindnet-167819) DBG | Writing SSH key tar header
	I1007 11:48:05.632293   57110 main.go:141] libmachine: (kindnet-167819) DBG | I1007 11:48:05.632238   57149 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/kindnet-167819 ...
	I1007 11:48:05.632520   57110 main.go:141] libmachine: (kindnet-167819) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/kindnet-167819
	I1007 11:48:05.632570   57110 main.go:141] libmachine: (kindnet-167819) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/kindnet-167819 (perms=drwx------)
	I1007 11:48:05.632590   57110 main.go:141] libmachine: (kindnet-167819) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines
	I1007 11:48:05.632613   57110 main.go:141] libmachine: (kindnet-167819) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 11:48:05.632625   57110 main.go:141] libmachine: (kindnet-167819) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912
	I1007 11:48:05.632641   57110 main.go:141] libmachine: (kindnet-167819) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 11:48:05.632648   57110 main.go:141] libmachine: (kindnet-167819) DBG | Checking permissions on dir: /home/jenkins
	I1007 11:48:05.632657   57110 main.go:141] libmachine: (kindnet-167819) DBG | Checking permissions on dir: /home
	I1007 11:48:05.632667   57110 main.go:141] libmachine: (kindnet-167819) DBG | Skipping /home - not owner
	I1007 11:48:05.632678   57110 main.go:141] libmachine: (kindnet-167819) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines (perms=drwxr-xr-x)
	I1007 11:48:05.632692   57110 main.go:141] libmachine: (kindnet-167819) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube (perms=drwxr-xr-x)
	I1007 11:48:05.632709   57110 main.go:141] libmachine: (kindnet-167819) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912 (perms=drwxrwxr-x)
	I1007 11:48:05.632739   57110 main.go:141] libmachine: (kindnet-167819) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 11:48:05.632775   57110 main.go:141] libmachine: (kindnet-167819) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 11:48:05.632822   57110 main.go:141] libmachine: (kindnet-167819) Creating domain...
	I1007 11:48:05.633758   57110 main.go:141] libmachine: (kindnet-167819) define libvirt domain using xml: 
	I1007 11:48:05.633781   57110 main.go:141] libmachine: (kindnet-167819) <domain type='kvm'>
	I1007 11:48:05.633802   57110 main.go:141] libmachine: (kindnet-167819)   <name>kindnet-167819</name>
	I1007 11:48:05.633822   57110 main.go:141] libmachine: (kindnet-167819)   <memory unit='MiB'>3072</memory>
	I1007 11:48:05.633831   57110 main.go:141] libmachine: (kindnet-167819)   <vcpu>2</vcpu>
	I1007 11:48:05.633841   57110 main.go:141] libmachine: (kindnet-167819)   <features>
	I1007 11:48:05.633850   57110 main.go:141] libmachine: (kindnet-167819)     <acpi/>
	I1007 11:48:05.633860   57110 main.go:141] libmachine: (kindnet-167819)     <apic/>
	I1007 11:48:05.633867   57110 main.go:141] libmachine: (kindnet-167819)     <pae/>
	I1007 11:48:05.633875   57110 main.go:141] libmachine: (kindnet-167819)     
	I1007 11:48:05.633890   57110 main.go:141] libmachine: (kindnet-167819)   </features>
	I1007 11:48:05.633900   57110 main.go:141] libmachine: (kindnet-167819)   <cpu mode='host-passthrough'>
	I1007 11:48:05.633907   57110 main.go:141] libmachine: (kindnet-167819)   
	I1007 11:48:05.633916   57110 main.go:141] libmachine: (kindnet-167819)   </cpu>
	I1007 11:48:05.633924   57110 main.go:141] libmachine: (kindnet-167819)   <os>
	I1007 11:48:05.633931   57110 main.go:141] libmachine: (kindnet-167819)     <type>hvm</type>
	I1007 11:48:05.633942   57110 main.go:141] libmachine: (kindnet-167819)     <boot dev='cdrom'/>
	I1007 11:48:05.633951   57110 main.go:141] libmachine: (kindnet-167819)     <boot dev='hd'/>
	I1007 11:48:05.633962   57110 main.go:141] libmachine: (kindnet-167819)     <bootmenu enable='no'/>
	I1007 11:48:05.633969   57110 main.go:141] libmachine: (kindnet-167819)   </os>
	I1007 11:48:05.633978   57110 main.go:141] libmachine: (kindnet-167819)   <devices>
	I1007 11:48:05.633984   57110 main.go:141] libmachine: (kindnet-167819)     <disk type='file' device='cdrom'>
	I1007 11:48:05.634005   57110 main.go:141] libmachine: (kindnet-167819)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/kindnet-167819/boot2docker.iso'/>
	I1007 11:48:05.634014   57110 main.go:141] libmachine: (kindnet-167819)       <target dev='hdc' bus='scsi'/>
	I1007 11:48:05.634021   57110 main.go:141] libmachine: (kindnet-167819)       <readonly/>
	I1007 11:48:05.634029   57110 main.go:141] libmachine: (kindnet-167819)     </disk>
	I1007 11:48:05.634037   57110 main.go:141] libmachine: (kindnet-167819)     <disk type='file' device='disk'>
	I1007 11:48:05.634048   57110 main.go:141] libmachine: (kindnet-167819)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 11:48:05.634072   57110 main.go:141] libmachine: (kindnet-167819)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/kindnet-167819/kindnet-167819.rawdisk'/>
	I1007 11:48:05.634094   57110 main.go:141] libmachine: (kindnet-167819)       <target dev='hda' bus='virtio'/>
	I1007 11:48:05.634104   57110 main.go:141] libmachine: (kindnet-167819)     </disk>
	I1007 11:48:05.634117   57110 main.go:141] libmachine: (kindnet-167819)     <interface type='network'>
	I1007 11:48:05.634129   57110 main.go:141] libmachine: (kindnet-167819)       <source network='mk-kindnet-167819'/>
	I1007 11:48:05.634146   57110 main.go:141] libmachine: (kindnet-167819)       <model type='virtio'/>
	I1007 11:48:05.634159   57110 main.go:141] libmachine: (kindnet-167819)     </interface>
	I1007 11:48:05.634172   57110 main.go:141] libmachine: (kindnet-167819)     <interface type='network'>
	I1007 11:48:05.634215   57110 main.go:141] libmachine: (kindnet-167819)       <source network='default'/>
	I1007 11:48:05.634224   57110 main.go:141] libmachine: (kindnet-167819)       <model type='virtio'/>
	I1007 11:48:05.634241   57110 main.go:141] libmachine: (kindnet-167819)     </interface>
	I1007 11:48:05.634253   57110 main.go:141] libmachine: (kindnet-167819)     <serial type='pty'>
	I1007 11:48:05.634267   57110 main.go:141] libmachine: (kindnet-167819)       <target port='0'/>
	I1007 11:48:05.634278   57110 main.go:141] libmachine: (kindnet-167819)     </serial>
	I1007 11:48:05.634289   57110 main.go:141] libmachine: (kindnet-167819)     <console type='pty'>
	I1007 11:48:05.634306   57110 main.go:141] libmachine: (kindnet-167819)       <target type='serial' port='0'/>
	I1007 11:48:05.634319   57110 main.go:141] libmachine: (kindnet-167819)     </console>
	I1007 11:48:05.634328   57110 main.go:141] libmachine: (kindnet-167819)     <rng model='virtio'>
	I1007 11:48:05.634342   57110 main.go:141] libmachine: (kindnet-167819)       <backend model='random'>/dev/random</backend>
	I1007 11:48:05.634354   57110 main.go:141] libmachine: (kindnet-167819)     </rng>
	I1007 11:48:05.634374   57110 main.go:141] libmachine: (kindnet-167819)     
	I1007 11:48:05.634391   57110 main.go:141] libmachine: (kindnet-167819)     
	I1007 11:48:05.634402   57110 main.go:141] libmachine: (kindnet-167819)   </devices>
	I1007 11:48:05.634411   57110 main.go:141] libmachine: (kindnet-167819) </domain>
	I1007 11:48:05.634426   57110 main.go:141] libmachine: (kindnet-167819) 
	I1007 11:48:05.638977   57110 main.go:141] libmachine: (kindnet-167819) DBG | domain kindnet-167819 has defined MAC address 52:54:00:f9:43:d1 in network default
	I1007 11:48:05.639669   57110 main.go:141] libmachine: (kindnet-167819) Ensuring networks are active...
	I1007 11:48:05.639693   57110 main.go:141] libmachine: (kindnet-167819) DBG | domain kindnet-167819 has defined MAC address 52:54:00:b8:50:c8 in network mk-kindnet-167819
	I1007 11:48:05.640448   57110 main.go:141] libmachine: (kindnet-167819) Ensuring network default is active
	I1007 11:48:05.640902   57110 main.go:141] libmachine: (kindnet-167819) Ensuring network mk-kindnet-167819 is active
	I1007 11:48:05.641480   57110 main.go:141] libmachine: (kindnet-167819) Getting domain xml...
	I1007 11:48:05.642375   57110 main.go:141] libmachine: (kindnet-167819) Creating domain...
	I1007 11:48:07.001389   57110 main.go:141] libmachine: (kindnet-167819) Waiting to get IP...
	I1007 11:48:07.002506   57110 main.go:141] libmachine: (kindnet-167819) DBG | domain kindnet-167819 has defined MAC address 52:54:00:b8:50:c8 in network mk-kindnet-167819
	I1007 11:48:07.002978   57110 main.go:141] libmachine: (kindnet-167819) DBG | unable to find current IP address of domain kindnet-167819 in network mk-kindnet-167819
	I1007 11:48:07.003004   57110 main.go:141] libmachine: (kindnet-167819) DBG | I1007 11:48:07.002947   57149 retry.go:31] will retry after 249.887761ms: waiting for machine to come up
	I1007 11:48:07.254332   57110 main.go:141] libmachine: (kindnet-167819) DBG | domain kindnet-167819 has defined MAC address 52:54:00:b8:50:c8 in network mk-kindnet-167819
	I1007 11:48:07.254921   57110 main.go:141] libmachine: (kindnet-167819) DBG | unable to find current IP address of domain kindnet-167819 in network mk-kindnet-167819
	I1007 11:48:07.254943   57110 main.go:141] libmachine: (kindnet-167819) DBG | I1007 11:48:07.254878   57149 retry.go:31] will retry after 312.814496ms: waiting for machine to come up
	I1007 11:48:06.775876   56886 main.go:141] libmachine: (pause-328632) Calling .GetIP
	I1007 11:48:06.779335   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:06.779816   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:48:06.779837   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:06.780141   56886 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1007 11:48:06.785713   56886 kubeadm.go:883] updating cluster {Name:pause-328632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-328632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.219 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 11:48:06.785882   56886 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:48:06.785949   56886 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:48:06.845738   56886 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 11:48:06.845760   56886 crio.go:433] Images already preloaded, skipping extraction
	I1007 11:48:06.845812   56886 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:48:06.889201   56886 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 11:48:06.889224   56886 cache_images.go:84] Images are preloaded, skipping loading
	I1007 11:48:06.889233   56886 kubeadm.go:934] updating node { 192.168.72.219 8443 v1.31.1 crio true true} ...
	I1007 11:48:06.889338   56886 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-328632 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.219
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-328632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 11:48:06.889427   56886 ssh_runner.go:195] Run: crio config
	I1007 11:48:06.951385   56886 cni.go:84] Creating CNI manager for ""
	I1007 11:48:06.951406   56886 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 11:48:06.951418   56886 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 11:48:06.951445   56886 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.219 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-328632 NodeName:pause-328632 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.219"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.219 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 11:48:06.951618   56886 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.219
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-328632"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.219
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.219"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 11:48:06.951680   56886 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 11:48:06.963463   56886 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 11:48:06.963536   56886 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 11:48:06.975461   56886 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1007 11:48:06.993965   56886 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 11:48:07.012739   56886 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I1007 11:48:07.031526   56886 ssh_runner.go:195] Run: grep 192.168.72.219	control-plane.minikube.internal$ /etc/hosts
	I1007 11:48:07.035758   56886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:48:07.202226   56886 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 11:48:07.218569   56886 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632 for IP: 192.168.72.219
	I1007 11:48:07.218591   56886 certs.go:194] generating shared ca certs ...
	I1007 11:48:07.218605   56886 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:48:07.218777   56886 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 11:48:07.218834   56886 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 11:48:07.218848   56886 certs.go:256] generating profile certs ...
	I1007 11:48:07.218958   56886 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632/client.key
	I1007 11:48:07.219027   56886 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632/apiserver.key.dd135421
	I1007 11:48:07.219089   56886 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632/proxy-client.key
	I1007 11:48:07.219224   56886 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 11:48:07.219258   56886 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 11:48:07.219271   56886 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 11:48:07.219308   56886 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 11:48:07.219336   56886 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 11:48:07.219367   56886 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 11:48:07.219423   56886 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 11:48:07.220081   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 11:48:07.249790   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 11:48:07.278078   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 11:48:07.313071   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 11:48:07.342821   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1007 11:48:07.369334   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 11:48:07.396167   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 11:48:07.427370   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 11:48:07.457219   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 11:48:07.488520   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 11:48:07.516753   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 11:48:07.543254   56886 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 11:48:07.563436   56886 ssh_runner.go:195] Run: openssl version
	I1007 11:48:07.570335   56886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 11:48:07.582600   56886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 11:48:07.587333   56886 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 11:48:07.587396   56886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 11:48:07.593384   56886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 11:48:07.603977   56886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 11:48:07.616357   56886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:48:07.621194   56886 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:48:07.621270   56886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:48:07.627462   56886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 11:48:07.640370   56886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 11:48:07.653613   56886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 11:48:07.658792   56886 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 11:48:07.658865   56886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 11:48:07.665411   56886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
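The `ln -fs` commands above build the standard OpenSSL hashed-symlink layout under /etc/ssl/certs: each CA file is linked as `<subject-hash>.0`, where the hash is what `openssl x509 -hash` prints. A minimal sketch of the same step, using the minikubeCA hash that appears in this log (b5213941):

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"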
	I1007 11:48:07.678525   56886 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 11:48:07.683638   56886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 11:48:07.690137   56886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 11:48:07.696834   56886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 11:48:07.703414   56886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 11:48:07.710055   56886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 11:48:07.716619   56886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
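The six `-checkend 86400` runs above only verify that each control-plane certificate will still be valid 24 hours from now; openssl exits non-zero if the certificate expires within that window. A standalone version of the same check, assuming the apiserver cert path from this log:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
        echo "certificate valid for at least 24h"
    else
        echo "certificate expires within 24h"
    fi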
	I1007 11:48:07.722594   56886 kubeadm.go:392] StartCluster: {Name:pause-328632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:pause-328632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.219 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-secu
rity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:48:07.722710   56886 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 11:48:07.722764   56886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 11:48:07.923553   56886 cri.go:89] found id: "949e59046784c43201c4d9b447e8274967d2d64d08262b082084f7f81283274f"
	I1007 11:48:07.923583   56886 cri.go:89] found id: "34826c9a89b28965c296398aeec406d6c08dabe388de546f079ad262827a51b6"
	I1007 11:48:07.923588   56886 cri.go:89] found id: "97cadc0102ef22d68c6dfac6e8da330cb407ecdf72db6b70abf8d78d8b8d744c"
	I1007 11:48:07.923592   56886 cri.go:89] found id: "ed3fc62150c6cc0a757f8662ff2ad489307b473010c76c7e4cf39b46fcdc0ab6"
	I1007 11:48:07.923597   56886 cri.go:89] found id: "8738c004ac7649299b52691648c8ddd5b8c96190044c2155d113061ba85992a1"
	I1007 11:48:07.923601   56886 cri.go:89] found id: "013a9b10ece36ffe03232b8539a92f6679b62aa4c5570ff41898ae0975a779d7"
	I1007 11:48:07.923605   56886 cri.go:89] found id: ""
	I1007 11:48:07.923657   56886 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-328632 -n pause-328632
helpers_test.go:261: (dbg) Run:  kubectl --context pause-328632 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-328632 -n pause-328632
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-328632 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-328632 logs -n 25: (1.444978573s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-167819 sudo                 | cilium-167819             | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC |                     |
	|         | systemctl status crio --all           |                           |         |         |                     |                     |
	|         | --full --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-167819 sudo                 | cilium-167819             | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC |                     |
	|         | systemctl cat crio --no-pager         |                           |         |         |                     |                     |
	| ssh     | -p cilium-167819 sudo find            | cilium-167819             | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC |                     |
	|         | /etc/crio -type f -exec sh -c         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-167819 sudo crio            | cilium-167819             | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-167819                      | cilium-167819             | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC | 07 Oct 24 11:44 UTC |
	| start   | -p cert-expiration-658191             | cert-expiration-658191    | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC | 07 Oct 24 11:45 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-056919             | running-upgrade-056919    | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC | 07 Oct 24 11:44 UTC |
	| start   | -p force-systemd-flag-468078          | force-systemd-flag-468078 | jenkins | v1.34.0 | 07 Oct 24 11:44 UTC | 07 Oct 24 11:46 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-264062           | force-systemd-env-264062  | jenkins | v1.34.0 | 07 Oct 24 11:45 UTC | 07 Oct 24 11:45 UTC |
	| start   | -p cert-options-495675                | cert-options-495675       | jenkins | v1.34.0 | 07 Oct 24 11:45 UTC | 07 Oct 24 11:46 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-852078          | kubernetes-upgrade-852078 | jenkins | v1.34.0 | 07 Oct 24 11:45 UTC | 07 Oct 24 11:45 UTC |
	| start   | -p kubernetes-upgrade-852078          | kubernetes-upgrade-852078 | jenkins | v1.34.0 | 07 Oct 24 11:45 UTC | 07 Oct 24 11:46 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-468078 ssh cat     | force-systemd-flag-468078 | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC | 07 Oct 24 11:46 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-468078          | force-systemd-flag-468078 | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC | 07 Oct 24 11:46 UTC |
	| start   | -p pause-328632 --memory=2048         | pause-328632              | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC | 07 Oct 24 11:47 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-495675 ssh               | cert-options-495675       | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC | 07 Oct 24 11:46 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-495675 -- sudo        | cert-options-495675       | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC | 07 Oct 24 11:46 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-495675                | cert-options-495675       | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC | 07 Oct 24 11:46 UTC |
	| start   | -p auto-167819 --memory=3072          | auto-167819               | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC | 07 Oct 24 11:48 UTC |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-852078          | kubernetes-upgrade-852078 | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-852078          | kubernetes-upgrade-852078 | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC | 07 Oct 24 11:47 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-328632                       | pause-328632              | jenkins | v1.34.0 | 07 Oct 24 11:47 UTC | 07 Oct 24 11:48 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-852078          | kubernetes-upgrade-852078 | jenkins | v1.34.0 | 07 Oct 24 11:48 UTC | 07 Oct 24 11:48 UTC |
	| start   | -p kindnet-167819                     | kindnet-167819            | jenkins | v1.34.0 | 07 Oct 24 11:48 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p auto-167819 pgrep -a               | auto-167819               | jenkins | v1.34.0 | 07 Oct 24 11:48 UTC | 07 Oct 24 11:48 UTC |
	|         | kubelet                               |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 11:48:02
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 11:48:02.471921   57110 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:48:02.472087   57110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:48:02.472099   57110 out.go:358] Setting ErrFile to fd 2...
	I1007 11:48:02.472105   57110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:48:02.472299   57110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 11:48:02.472916   57110 out.go:352] Setting JSON to false
	I1007 11:48:02.473828   57110 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5376,"bootTime":1728296306,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 11:48:02.473937   57110 start.go:139] virtualization: kvm guest
	I1007 11:48:02.476582   57110 out.go:177] * [kindnet-167819] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 11:48:02.477962   57110 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 11:48:02.477962   57110 notify.go:220] Checking for updates...
	I1007 11:48:02.479398   57110 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:48:02.480962   57110 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 11:48:02.482873   57110 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 11:48:02.484197   57110 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 11:48:02.485735   57110 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 11:48:02.487627   57110 config.go:182] Loaded profile config "auto-167819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:48:02.487764   57110 config.go:182] Loaded profile config "cert-expiration-658191": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:48:02.487888   57110 config.go:182] Loaded profile config "pause-328632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:48:02.487964   57110 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 11:48:02.528021   57110 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 11:48:02.529198   57110 start.go:297] selected driver: kvm2
	I1007 11:48:02.529213   57110 start.go:901] validating driver "kvm2" against <nil>
	I1007 11:48:02.529227   57110 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 11:48:02.530053   57110 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:48:02.530148   57110 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19761-3912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 11:48:02.545967   57110 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 11:48:02.546009   57110 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 11:48:02.546250   57110 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 11:48:02.546284   57110 cni.go:84] Creating CNI manager for "kindnet"
	I1007 11:48:02.546293   57110 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 11:48:02.546336   57110 start.go:340] cluster config:
	{Name:kindnet-167819 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-167819 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:48:02.546429   57110 iso.go:125] acquiring lock: {Name:mk894f62b1e70fe8556c552f818beb1e4be6fad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:48:02.548264   57110 out.go:177] * Starting "kindnet-167819" primary control-plane node in "kindnet-167819" cluster
	I1007 11:47:58.459130   56886 machine.go:93] provisionDockerMachine start ...
	I1007 11:47:58.459159   56886 main.go:141] libmachine: (pause-328632) Calling .DriverName
	I1007 11:47:58.459385   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:47:58.462380   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.462815   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:47:58.462841   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.463002   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:47:58.463155   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:58.463317   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:58.463462   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:47:58.463633   56886 main.go:141] libmachine: Using SSH client type: native
	I1007 11:47:58.463913   56886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I1007 11:47:58.463928   56886 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 11:47:58.585245   56886 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-328632
	
	I1007 11:47:58.585276   56886 main.go:141] libmachine: (pause-328632) Calling .GetMachineName
	I1007 11:47:58.585498   56886 buildroot.go:166] provisioning hostname "pause-328632"
	I1007 11:47:58.585535   56886 main.go:141] libmachine: (pause-328632) Calling .GetMachineName
	I1007 11:47:58.585749   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:47:58.588898   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.589360   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:47:58.589411   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.589692   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:47:58.589881   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:58.590021   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:58.590133   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:47:58.590304   56886 main.go:141] libmachine: Using SSH client type: native
	I1007 11:47:58.590512   56886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I1007 11:47:58.590529   56886 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-328632 && echo "pause-328632" | sudo tee /etc/hostname
	I1007 11:47:58.730940   56886 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-328632
	
	I1007 11:47:58.730972   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:47:58.733998   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.734363   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:47:58.734392   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.734586   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:47:58.734799   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:58.734960   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:58.735110   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:47:58.735291   56886 main.go:141] libmachine: Using SSH client type: native
	I1007 11:47:58.735471   56886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I1007 11:47:58.735492   56886 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-328632' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-328632/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-328632' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 11:47:58.855403   56886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 11:47:58.855436   56886 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19761-3912/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-3912/.minikube}
	I1007 11:47:58.855457   56886 buildroot.go:174] setting up certificates
	I1007 11:47:58.855470   56886 provision.go:84] configureAuth start
	I1007 11:47:58.855482   56886 main.go:141] libmachine: (pause-328632) Calling .GetMachineName
	I1007 11:47:58.855768   56886 main.go:141] libmachine: (pause-328632) Calling .GetIP
	I1007 11:47:58.859054   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.859582   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:47:58.859618   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.859791   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:47:58.862799   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.863242   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:47:58.863267   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:58.863425   56886 provision.go:143] copyHostCerts
	I1007 11:47:58.863479   56886 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem, removing ...
	I1007 11:47:58.863499   56886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem
	I1007 11:47:58.863576   56886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/ca.pem (1082 bytes)
	I1007 11:47:58.863682   56886 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem, removing ...
	I1007 11:47:58.863690   56886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem
	I1007 11:47:58.863719   56886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/cert.pem (1123 bytes)
	I1007 11:47:58.863798   56886 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem, removing ...
	I1007 11:47:58.863807   56886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem
	I1007 11:47:58.863836   56886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-3912/.minikube/key.pem (1675 bytes)
	I1007 11:47:58.863913   56886 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem org=jenkins.pause-328632 san=[127.0.0.1 192.168.72.219 localhost minikube pause-328632]
	I1007 11:47:59.190511   56886 provision.go:177] copyRemoteCerts
	I1007 11:47:59.190569   56886 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 11:47:59.190591   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:47:59.193770   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:59.194175   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:47:59.194233   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:59.194420   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:47:59.194618   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:59.194781   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:47:59.194934   56886 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/pause-328632/id_rsa Username:docker}
	I1007 11:47:59.284938   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 11:47:59.317674   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 11:47:59.349795   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 11:47:59.378706   56886 provision.go:87] duration metric: took 523.223734ms to configureAuth
	I1007 11:47:59.378736   56886 buildroot.go:189] setting minikube options for container-runtime
	I1007 11:47:59.378948   56886 config.go:182] Loaded profile config "pause-328632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:47:59.379014   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:47:59.381971   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:59.382309   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:47:59.382338   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:47:59.382533   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:47:59.382694   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:59.382871   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:47:59.383006   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:47:59.383162   56886 main.go:141] libmachine: Using SSH client type: native
	I1007 11:47:59.383309   56886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I1007 11:47:59.383324   56886 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 11:47:59.825659   56038 pod_ready.go:103] pod "coredns-7c65d6cfc9-4wtfv" in "kube-system" namespace has status "Ready":"False"
	I1007 11:48:02.325057   56038 pod_ready.go:103] pod "coredns-7c65d6cfc9-4wtfv" in "kube-system" namespace has status "Ready":"False"
	I1007 11:48:02.549476   57110 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:48:02.549511   57110 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 11:48:02.549518   57110 cache.go:56] Caching tarball of preloaded images
	I1007 11:48:02.549595   57110 preload.go:172] Found /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 11:48:02.549607   57110 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 11:48:02.549715   57110 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kindnet-167819/config.json ...
	I1007 11:48:02.549734   57110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/kindnet-167819/config.json: {Name:mk3772bf1d8e7c3dfddf1e4e448acf7d973f76ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:48:02.549898   57110 start.go:360] acquireMachinesLock for kindnet-167819: {Name:mk033590ff8b68d4c87fe83ea6754c8c0328ac7c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 11:48:05.165391   57110 start.go:364] duration metric: took 2.615457037s to acquireMachinesLock for "kindnet-167819"
	I1007 11:48:05.165451   57110 start.go:93] Provisioning new machine with config: &{Name:kindnet-167819 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:kindnet-167819 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 11:48:05.165595   57110 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 11:48:04.915170   56886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 11:48:04.915204   56886 machine.go:96] duration metric: took 6.456056821s to provisionDockerMachine
	I1007 11:48:04.915218   56886 start.go:293] postStartSetup for "pause-328632" (driver="kvm2")
	I1007 11:48:04.915231   56886 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 11:48:04.915255   56886 main.go:141] libmachine: (pause-328632) Calling .DriverName
	I1007 11:48:04.915580   56886 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 11:48:04.915614   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:48:04.918516   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:04.918982   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:48:04.919013   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:04.919154   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:48:04.919364   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:48:04.919506   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:48:04.919647   56886 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/pause-328632/id_rsa Username:docker}
	I1007 11:48:05.007137   56886 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 11:48:05.011837   56886 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 11:48:05.011861   56886 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/addons for local assets ...
	I1007 11:48:05.011931   56886 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-3912/.minikube/files for local assets ...
	I1007 11:48:05.012053   56886 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem -> 110962.pem in /etc/ssl/certs
	I1007 11:48:05.012141   56886 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 11:48:05.021937   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /etc/ssl/certs/110962.pem (1708 bytes)
	I1007 11:48:05.048629   56886 start.go:296] duration metric: took 133.395142ms for postStartSetup
	I1007 11:48:05.048673   56886 fix.go:56] duration metric: took 6.615502714s for fixHost
	I1007 11:48:05.048697   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:48:05.051614   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:05.051954   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:48:05.052005   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:05.052164   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:48:05.052337   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:48:05.052472   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:48:05.052624   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:48:05.052813   56886 main.go:141] libmachine: Using SSH client type: native
	I1007 11:48:05.052984   56886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.72.219 22 <nil> <nil>}
	I1007 11:48:05.053019   56886 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 11:48:05.165216   56886 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728301685.154599192
	
	I1007 11:48:05.165238   56886 fix.go:216] guest clock: 1728301685.154599192
	I1007 11:48:05.165246   56886 fix.go:229] Guest: 2024-10-07 11:48:05.154599192 +0000 UTC Remote: 2024-10-07 11:48:05.048678627 +0000 UTC m=+6.863515987 (delta=105.920565ms)
	I1007 11:48:05.165289   56886 fix.go:200] guest clock delta is within tolerance: 105.920565ms
	I1007 11:48:05.165293   56886 start.go:83] releasing machines lock for "pause-328632", held for 6.732139355s
	I1007 11:48:05.165319   56886 main.go:141] libmachine: (pause-328632) Calling .DriverName
	I1007 11:48:05.165595   56886 main.go:141] libmachine: (pause-328632) Calling .GetIP
	I1007 11:48:05.169272   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:05.169647   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:48:05.169671   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:05.169882   56886 main.go:141] libmachine: (pause-328632) Calling .DriverName
	I1007 11:48:05.170455   56886 main.go:141] libmachine: (pause-328632) Calling .DriverName
	I1007 11:48:05.170627   56886 main.go:141] libmachine: (pause-328632) Calling .DriverName
	I1007 11:48:05.170702   56886 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 11:48:05.170753   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:48:05.170859   56886 ssh_runner.go:195] Run: cat /version.json
	I1007 11:48:05.170886   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHHostname
	I1007 11:48:05.173964   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:05.174057   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:05.174320   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:48:05.174343   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:05.174487   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:48:05.174587   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:48:05.174617   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:05.174685   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:48:05.174774   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHPort
	I1007 11:48:05.174845   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:48:05.174908   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHKeyPath
	I1007 11:48:05.174966   56886 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/pause-328632/id_rsa Username:docker}
	I1007 11:48:05.174998   56886 main.go:141] libmachine: (pause-328632) Calling .GetSSHUsername
	I1007 11:48:05.175091   56886 sshutil.go:53] new ssh client: &{IP:192.168.72.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/pause-328632/id_rsa Username:docker}
	I1007 11:48:05.283569   56886 ssh_runner.go:195] Run: systemctl --version
	I1007 11:48:05.290655   56886 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 11:48:05.456517   56886 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 11:48:05.464780   56886 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 11:48:05.464851   56886 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 11:48:05.476708   56886 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
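	The find invocation above is how minikube side-lines any stray bridge/podman CNI configs: matching files are renamed with a .mk_disabled suffix so CRI-O ignores them (on this run nothing matched). As a hedged aside, a rename of that form can be undone with a similar one-liner; the command below is only an illustration, not something the harness runs:
	
	    sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*.mk_disabled' \
	      -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;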
	I1007 11:48:05.476736   56886 start.go:495] detecting cgroup driver to use...
	I1007 11:48:05.476820   56886 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 11:48:05.494023   56886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 11:48:05.510117   56886 docker.go:217] disabling cri-docker service (if available) ...
	I1007 11:48:05.510197   56886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 11:48:05.525350   56886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 11:48:05.540104   56886 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 11:48:05.688555   56886 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 11:48:05.827204   56886 docker.go:233] disabling docker service ...
	I1007 11:48:05.827279   56886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 11:48:05.847161   56886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 11:48:05.863040   56886 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 11:48:05.994082   56886 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 11:48:06.124692   56886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 11:48:06.141638   56886 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 11:48:06.164908   56886 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 11:48:06.164978   56886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:48:06.176269   56886 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 11:48:06.176338   56886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:48:06.186959   56886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:48:06.198757   56886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:48:06.210499   56886 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 11:48:06.222580   56886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:48:06.233927   56886 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:48:06.246494   56886 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:48:06.259303   56886 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 11:48:06.270953   56886 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 11:48:06.282702   56886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:48:06.416347   56886 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 11:48:06.642267   56886 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 11:48:06.642331   56886 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 11:48:06.648296   56886 start.go:563] Will wait 60s for crictl version
	I1007 11:48:06.648352   56886 ssh_runner.go:195] Run: which crictl
	I1007 11:48:06.652658   56886 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 11:48:06.703409   56886 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 11:48:06.703487   56886 ssh_runner.go:195] Run: crio --version
	I1007 11:48:06.738106   56886 ssh_runner.go:195] Run: crio --version
	I1007 11:48:06.774650   56886 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
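	The sequence above writes /etc/crictl.yaml (pointing crictl at the CRI-O socket) and patches /etc/crio/crio.conf.d/02-crio.conf in place with sed before restarting CRI-O. Assuming every sed target matched, the relevant lines of that drop-in would read roughly as follows (section headers and unrelated keys omitted):
	
	    pause_image = "registry.k8s.io/pause:3.10"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	
	and /etc/crictl.yaml would contain the single line:
	
	    runtime-endpoint: unix:///var/run/crio/crio.sock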
	I1007 11:48:05.167712   57110 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 11:48:05.167921   57110 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:48:05.168004   57110 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:48:05.188955   57110 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33917
	I1007 11:48:05.189466   57110 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:48:05.190016   57110 main.go:141] libmachine: Using API Version  1
	I1007 11:48:05.190053   57110 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:48:05.190442   57110 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:48:05.190655   57110 main.go:141] libmachine: (kindnet-167819) Calling .GetMachineName
	I1007 11:48:05.190810   57110 main.go:141] libmachine: (kindnet-167819) Calling .DriverName
	I1007 11:48:05.191033   57110 start.go:159] libmachine.API.Create for "kindnet-167819" (driver="kvm2")
	I1007 11:48:05.191071   57110 client.go:168] LocalClient.Create starting
	I1007 11:48:05.191117   57110 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem
	I1007 11:48:05.191166   57110 main.go:141] libmachine: Decoding PEM data...
	I1007 11:48:05.191190   57110 main.go:141] libmachine: Parsing certificate...
	I1007 11:48:05.191261   57110 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem
	I1007 11:48:05.191290   57110 main.go:141] libmachine: Decoding PEM data...
	I1007 11:48:05.191311   57110 main.go:141] libmachine: Parsing certificate...
	I1007 11:48:05.191336   57110 main.go:141] libmachine: Running pre-create checks...
	I1007 11:48:05.191349   57110 main.go:141] libmachine: (kindnet-167819) Calling .PreCreateCheck
	I1007 11:48:05.191733   57110 main.go:141] libmachine: (kindnet-167819) Calling .GetConfigRaw
	I1007 11:48:05.192189   57110 main.go:141] libmachine: Creating machine...
	I1007 11:48:05.192205   57110 main.go:141] libmachine: (kindnet-167819) Calling .Create
	I1007 11:48:05.192326   57110 main.go:141] libmachine: (kindnet-167819) Creating KVM machine...
	I1007 11:48:05.193774   57110 main.go:141] libmachine: (kindnet-167819) DBG | found existing default KVM network
	I1007 11:48:05.195597   57110 main.go:141] libmachine: (kindnet-167819) DBG | I1007 11:48:05.195422   57149 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000300090}
	I1007 11:48:05.195667   57110 main.go:141] libmachine: (kindnet-167819) DBG | created network xml: 
	I1007 11:48:05.195688   57110 main.go:141] libmachine: (kindnet-167819) DBG | <network>
	I1007 11:48:05.195699   57110 main.go:141] libmachine: (kindnet-167819) DBG |   <name>mk-kindnet-167819</name>
	I1007 11:48:05.195710   57110 main.go:141] libmachine: (kindnet-167819) DBG |   <dns enable='no'/>
	I1007 11:48:05.195721   57110 main.go:141] libmachine: (kindnet-167819) DBG |   
	I1007 11:48:05.195734   57110 main.go:141] libmachine: (kindnet-167819) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1007 11:48:05.195746   57110 main.go:141] libmachine: (kindnet-167819) DBG |     <dhcp>
	I1007 11:48:05.195755   57110 main.go:141] libmachine: (kindnet-167819) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1007 11:48:05.195766   57110 main.go:141] libmachine: (kindnet-167819) DBG |     </dhcp>
	I1007 11:48:05.195773   57110 main.go:141] libmachine: (kindnet-167819) DBG |   </ip>
	I1007 11:48:05.195783   57110 main.go:141] libmachine: (kindnet-167819) DBG |   
	I1007 11:48:05.195794   57110 main.go:141] libmachine: (kindnet-167819) DBG | </network>
	I1007 11:48:05.195825   57110 main.go:141] libmachine: (kindnet-167819) DBG | 
	I1007 11:48:05.201794   57110 main.go:141] libmachine: (kindnet-167819) DBG | trying to create private KVM network mk-kindnet-167819 192.168.39.0/24...
	I1007 11:48:05.277661   57110 main.go:141] libmachine: (kindnet-167819) DBG | private KVM network mk-kindnet-167819 192.168.39.0/24 created
	I1007 11:48:05.277694   57110 main.go:141] libmachine: (kindnet-167819) DBG | I1007 11:48:05.277622   57149 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 11:48:05.277749   57110 main.go:141] libmachine: (kindnet-167819) Setting up store path in /home/jenkins/minikube-integration/19761-3912/.minikube/machines/kindnet-167819 ...
	I1007 11:48:05.277791   57110 main.go:141] libmachine: (kindnet-167819) Building disk image from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 11:48:05.277831   57110 main.go:141] libmachine: (kindnet-167819) Downloading /home/jenkins/minikube-integration/19761-3912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 11:48:05.545703   57110 main.go:141] libmachine: (kindnet-167819) DBG | I1007 11:48:05.545611   57149 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/kindnet-167819/id_rsa...
	I1007 11:48:05.632204   57110 main.go:141] libmachine: (kindnet-167819) DBG | I1007 11:48:05.632065   57149 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/kindnet-167819/kindnet-167819.rawdisk...
	I1007 11:48:05.632248   57110 main.go:141] libmachine: (kindnet-167819) DBG | Writing magic tar header
	I1007 11:48:05.632267   57110 main.go:141] libmachine: (kindnet-167819) DBG | Writing SSH key tar header
	I1007 11:48:05.632293   57110 main.go:141] libmachine: (kindnet-167819) DBG | I1007 11:48:05.632238   57149 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/kindnet-167819 ...
	I1007 11:48:05.632520   57110 main.go:141] libmachine: (kindnet-167819) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines/kindnet-167819
	I1007 11:48:05.632570   57110 main.go:141] libmachine: (kindnet-167819) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines/kindnet-167819 (perms=drwx------)
	I1007 11:48:05.632590   57110 main.go:141] libmachine: (kindnet-167819) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube/machines
	I1007 11:48:05.632613   57110 main.go:141] libmachine: (kindnet-167819) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 11:48:05.632625   57110 main.go:141] libmachine: (kindnet-167819) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19761-3912
	I1007 11:48:05.632641   57110 main.go:141] libmachine: (kindnet-167819) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 11:48:05.632648   57110 main.go:141] libmachine: (kindnet-167819) DBG | Checking permissions on dir: /home/jenkins
	I1007 11:48:05.632657   57110 main.go:141] libmachine: (kindnet-167819) DBG | Checking permissions on dir: /home
	I1007 11:48:05.632667   57110 main.go:141] libmachine: (kindnet-167819) DBG | Skipping /home - not owner
	I1007 11:48:05.632678   57110 main.go:141] libmachine: (kindnet-167819) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube/machines (perms=drwxr-xr-x)
	I1007 11:48:05.632692   57110 main.go:141] libmachine: (kindnet-167819) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912/.minikube (perms=drwxr-xr-x)
	I1007 11:48:05.632709   57110 main.go:141] libmachine: (kindnet-167819) Setting executable bit set on /home/jenkins/minikube-integration/19761-3912 (perms=drwxrwxr-x)
	I1007 11:48:05.632739   57110 main.go:141] libmachine: (kindnet-167819) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 11:48:05.632775   57110 main.go:141] libmachine: (kindnet-167819) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 11:48:05.632822   57110 main.go:141] libmachine: (kindnet-167819) Creating domain...
	I1007 11:48:05.633758   57110 main.go:141] libmachine: (kindnet-167819) define libvirt domain using xml: 
	I1007 11:48:05.633781   57110 main.go:141] libmachine: (kindnet-167819) <domain type='kvm'>
	I1007 11:48:05.633802   57110 main.go:141] libmachine: (kindnet-167819)   <name>kindnet-167819</name>
	I1007 11:48:05.633822   57110 main.go:141] libmachine: (kindnet-167819)   <memory unit='MiB'>3072</memory>
	I1007 11:48:05.633831   57110 main.go:141] libmachine: (kindnet-167819)   <vcpu>2</vcpu>
	I1007 11:48:05.633841   57110 main.go:141] libmachine: (kindnet-167819)   <features>
	I1007 11:48:05.633850   57110 main.go:141] libmachine: (kindnet-167819)     <acpi/>
	I1007 11:48:05.633860   57110 main.go:141] libmachine: (kindnet-167819)     <apic/>
	I1007 11:48:05.633867   57110 main.go:141] libmachine: (kindnet-167819)     <pae/>
	I1007 11:48:05.633875   57110 main.go:141] libmachine: (kindnet-167819)     
	I1007 11:48:05.633890   57110 main.go:141] libmachine: (kindnet-167819)   </features>
	I1007 11:48:05.633900   57110 main.go:141] libmachine: (kindnet-167819)   <cpu mode='host-passthrough'>
	I1007 11:48:05.633907   57110 main.go:141] libmachine: (kindnet-167819)   
	I1007 11:48:05.633916   57110 main.go:141] libmachine: (kindnet-167819)   </cpu>
	I1007 11:48:05.633924   57110 main.go:141] libmachine: (kindnet-167819)   <os>
	I1007 11:48:05.633931   57110 main.go:141] libmachine: (kindnet-167819)     <type>hvm</type>
	I1007 11:48:05.633942   57110 main.go:141] libmachine: (kindnet-167819)     <boot dev='cdrom'/>
	I1007 11:48:05.633951   57110 main.go:141] libmachine: (kindnet-167819)     <boot dev='hd'/>
	I1007 11:48:05.633962   57110 main.go:141] libmachine: (kindnet-167819)     <bootmenu enable='no'/>
	I1007 11:48:05.633969   57110 main.go:141] libmachine: (kindnet-167819)   </os>
	I1007 11:48:05.633978   57110 main.go:141] libmachine: (kindnet-167819)   <devices>
	I1007 11:48:05.633984   57110 main.go:141] libmachine: (kindnet-167819)     <disk type='file' device='cdrom'>
	I1007 11:48:05.634005   57110 main.go:141] libmachine: (kindnet-167819)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/kindnet-167819/boot2docker.iso'/>
	I1007 11:48:05.634014   57110 main.go:141] libmachine: (kindnet-167819)       <target dev='hdc' bus='scsi'/>
	I1007 11:48:05.634021   57110 main.go:141] libmachine: (kindnet-167819)       <readonly/>
	I1007 11:48:05.634029   57110 main.go:141] libmachine: (kindnet-167819)     </disk>
	I1007 11:48:05.634037   57110 main.go:141] libmachine: (kindnet-167819)     <disk type='file' device='disk'>
	I1007 11:48:05.634048   57110 main.go:141] libmachine: (kindnet-167819)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 11:48:05.634072   57110 main.go:141] libmachine: (kindnet-167819)       <source file='/home/jenkins/minikube-integration/19761-3912/.minikube/machines/kindnet-167819/kindnet-167819.rawdisk'/>
	I1007 11:48:05.634094   57110 main.go:141] libmachine: (kindnet-167819)       <target dev='hda' bus='virtio'/>
	I1007 11:48:05.634104   57110 main.go:141] libmachine: (kindnet-167819)     </disk>
	I1007 11:48:05.634117   57110 main.go:141] libmachine: (kindnet-167819)     <interface type='network'>
	I1007 11:48:05.634129   57110 main.go:141] libmachine: (kindnet-167819)       <source network='mk-kindnet-167819'/>
	I1007 11:48:05.634146   57110 main.go:141] libmachine: (kindnet-167819)       <model type='virtio'/>
	I1007 11:48:05.634159   57110 main.go:141] libmachine: (kindnet-167819)     </interface>
	I1007 11:48:05.634172   57110 main.go:141] libmachine: (kindnet-167819)     <interface type='network'>
	I1007 11:48:05.634215   57110 main.go:141] libmachine: (kindnet-167819)       <source network='default'/>
	I1007 11:48:05.634224   57110 main.go:141] libmachine: (kindnet-167819)       <model type='virtio'/>
	I1007 11:48:05.634241   57110 main.go:141] libmachine: (kindnet-167819)     </interface>
	I1007 11:48:05.634253   57110 main.go:141] libmachine: (kindnet-167819)     <serial type='pty'>
	I1007 11:48:05.634267   57110 main.go:141] libmachine: (kindnet-167819)       <target port='0'/>
	I1007 11:48:05.634278   57110 main.go:141] libmachine: (kindnet-167819)     </serial>
	I1007 11:48:05.634289   57110 main.go:141] libmachine: (kindnet-167819)     <console type='pty'>
	I1007 11:48:05.634306   57110 main.go:141] libmachine: (kindnet-167819)       <target type='serial' port='0'/>
	I1007 11:48:05.634319   57110 main.go:141] libmachine: (kindnet-167819)     </console>
	I1007 11:48:05.634328   57110 main.go:141] libmachine: (kindnet-167819)     <rng model='virtio'>
	I1007 11:48:05.634342   57110 main.go:141] libmachine: (kindnet-167819)       <backend model='random'>/dev/random</backend>
	I1007 11:48:05.634354   57110 main.go:141] libmachine: (kindnet-167819)     </rng>
	I1007 11:48:05.634374   57110 main.go:141] libmachine: (kindnet-167819)     
	I1007 11:48:05.634391   57110 main.go:141] libmachine: (kindnet-167819)     
	I1007 11:48:05.634402   57110 main.go:141] libmachine: (kindnet-167819)   </devices>
	I1007 11:48:05.634411   57110 main.go:141] libmachine: (kindnet-167819) </domain>
	I1007 11:48:05.634426   57110 main.go:141] libmachine: (kindnet-167819) 
	I1007 11:48:05.638977   57110 main.go:141] libmachine: (kindnet-167819) DBG | domain kindnet-167819 has defined MAC address 52:54:00:f9:43:d1 in network default
	I1007 11:48:05.639669   57110 main.go:141] libmachine: (kindnet-167819) Ensuring networks are active...
	I1007 11:48:05.639693   57110 main.go:141] libmachine: (kindnet-167819) DBG | domain kindnet-167819 has defined MAC address 52:54:00:b8:50:c8 in network mk-kindnet-167819
	I1007 11:48:05.640448   57110 main.go:141] libmachine: (kindnet-167819) Ensuring network default is active
	I1007 11:48:05.640902   57110 main.go:141] libmachine: (kindnet-167819) Ensuring network mk-kindnet-167819 is active
	I1007 11:48:05.641480   57110 main.go:141] libmachine: (kindnet-167819) Getting domain xml...
	I1007 11:48:05.642375   57110 main.go:141] libmachine: (kindnet-167819) Creating domain...
	I1007 11:48:07.001389   57110 main.go:141] libmachine: (kindnet-167819) Waiting to get IP...
	I1007 11:48:07.002506   57110 main.go:141] libmachine: (kindnet-167819) DBG | domain kindnet-167819 has defined MAC address 52:54:00:b8:50:c8 in network mk-kindnet-167819
	I1007 11:48:07.002978   57110 main.go:141] libmachine: (kindnet-167819) DBG | unable to find current IP address of domain kindnet-167819 in network mk-kindnet-167819
	I1007 11:48:07.003004   57110 main.go:141] libmachine: (kindnet-167819) DBG | I1007 11:48:07.002947   57149 retry.go:31] will retry after 249.887761ms: waiting for machine to come up
	I1007 11:48:07.254332   57110 main.go:141] libmachine: (kindnet-167819) DBG | domain kindnet-167819 has defined MAC address 52:54:00:b8:50:c8 in network mk-kindnet-167819
	I1007 11:48:07.254921   57110 main.go:141] libmachine: (kindnet-167819) DBG | unable to find current IP address of domain kindnet-167819 in network mk-kindnet-167819
	I1007 11:48:07.254943   57110 main.go:141] libmachine: (kindnet-167819) DBG | I1007 11:48:07.254878   57149 retry.go:31] will retry after 312.814496ms: waiting for machine to come up
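	At this point the mk-kindnet-167819 network and the kindnet-167819 domain have been defined from the XML shown above, and the driver is polling DHCP leases for an address. Assuming shell access to the libvirt host, the same state could be inspected manually with standard virsh commands (shown only as an illustration; the driver itself talks to libvirt through its API):
	
	    virsh net-dumpxml mk-kindnet-167819
	    virsh dumpxml kindnet-167819
	    virsh domifaddr kindnet-167819 --source lease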
	I1007 11:48:06.775876   56886 main.go:141] libmachine: (pause-328632) Calling .GetIP
	I1007 11:48:06.779335   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:06.779816   56886 main.go:141] libmachine: (pause-328632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:c8:60", ip: ""} in network mk-pause-328632: {Iface:virbr3 ExpiryTime:2024-10-07 12:46:49 +0000 UTC Type:0 Mac:52:54:00:e7:c8:60 Iaid: IPaddr:192.168.72.219 Prefix:24 Hostname:pause-328632 Clientid:01:52:54:00:e7:c8:60}
	I1007 11:48:06.779837   56886 main.go:141] libmachine: (pause-328632) DBG | domain pause-328632 has defined IP address 192.168.72.219 and MAC address 52:54:00:e7:c8:60 in network mk-pause-328632
	I1007 11:48:06.780141   56886 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1007 11:48:06.785713   56886 kubeadm.go:883] updating cluster {Name:pause-328632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:pause-328632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.219 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-s
ecurity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 11:48:06.785882   56886 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:48:06.785949   56886 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:48:06.845738   56886 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 11:48:06.845760   56886 crio.go:433] Images already preloaded, skipping extraction
	I1007 11:48:06.845812   56886 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:48:06.889201   56886 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 11:48:06.889224   56886 cache_images.go:84] Images are preloaded, skipping loading
	I1007 11:48:06.889233   56886 kubeadm.go:934] updating node { 192.168.72.219 8443 v1.31.1 crio true true} ...
	I1007 11:48:06.889338   56886 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-328632 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.219
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-328632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 11:48:06.889427   56886 ssh_runner.go:195] Run: crio config
	I1007 11:48:06.951385   56886 cni.go:84] Creating CNI manager for ""
	I1007 11:48:06.951406   56886 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 11:48:06.951418   56886 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 11:48:06.951445   56886 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.219 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-328632 NodeName:pause-328632 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.219"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.219 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 11:48:06.951618   56886 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.219
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-328632"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.219
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.219"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 11:48:06.951680   56886 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 11:48:06.963463   56886 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 11:48:06.963536   56886 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 11:48:06.975461   56886 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1007 11:48:06.993965   56886 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 11:48:07.012739   56886 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
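	The kubelet drop-in, kubelet unit and kubeadm config have now been copied onto the guest; kubeadm.yaml.new carries the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents printed above. A hedged way to sanity-check such a file by hand (not something this harness does) is a dry run against the staged binary:
	
	    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run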
	I1007 11:48:07.031526   56886 ssh_runner.go:195] Run: grep 192.168.72.219	control-plane.minikube.internal$ /etc/hosts
	I1007 11:48:07.035758   56886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:48:07.202226   56886 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 11:48:07.218569   56886 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632 for IP: 192.168.72.219
	I1007 11:48:07.218591   56886 certs.go:194] generating shared ca certs ...
	I1007 11:48:07.218605   56886 certs.go:226] acquiring lock for ca certs: {Name:mkf57a9688688af6e57e75bbfa1f06ec72c2c052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:48:07.218777   56886 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key
	I1007 11:48:07.218834   56886 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key
	I1007 11:48:07.218848   56886 certs.go:256] generating profile certs ...
	I1007 11:48:07.218958   56886 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632/client.key
	I1007 11:48:07.219027   56886 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632/apiserver.key.dd135421
	I1007 11:48:07.219089   56886 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632/proxy-client.key
	I1007 11:48:07.219224   56886 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem (1338 bytes)
	W1007 11:48:07.219258   56886 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096_empty.pem, impossibly tiny 0 bytes
	I1007 11:48:07.219271   56886 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 11:48:07.219308   56886 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/ca.pem (1082 bytes)
	I1007 11:48:07.219336   56886 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/cert.pem (1123 bytes)
	I1007 11:48:07.219367   56886 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/certs/key.pem (1675 bytes)
	I1007 11:48:07.219423   56886 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem (1708 bytes)
	I1007 11:48:07.220081   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 11:48:07.249790   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 11:48:07.278078   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 11:48:07.313071   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 11:48:07.342821   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1007 11:48:07.369334   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 11:48:07.396167   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 11:48:07.427370   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/pause-328632/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 11:48:07.457219   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/ssl/certs/110962.pem --> /usr/share/ca-certificates/110962.pem (1708 bytes)
	I1007 11:48:07.488520   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 11:48:07.516753   56886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-3912/.minikube/certs/11096.pem --> /usr/share/ca-certificates/11096.pem (1338 bytes)
	I1007 11:48:07.543254   56886 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 11:48:07.563436   56886 ssh_runner.go:195] Run: openssl version
	I1007 11:48:07.570335   56886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110962.pem && ln -fs /usr/share/ca-certificates/110962.pem /etc/ssl/certs/110962.pem"
	I1007 11:48:07.582600   56886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110962.pem
	I1007 11:48:07.587333   56886 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:41 /usr/share/ca-certificates/110962.pem
	I1007 11:48:07.587396   56886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110962.pem
	I1007 11:48:07.593384   56886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110962.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 11:48:07.603977   56886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 11:48:07.616357   56886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:48:07.621194   56886 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:48:07.621270   56886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:48:07.627462   56886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 11:48:07.640370   56886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11096.pem && ln -fs /usr/share/ca-certificates/11096.pem /etc/ssl/certs/11096.pem"
	I1007 11:48:07.653613   56886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11096.pem
	I1007 11:48:07.658792   56886 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:41 /usr/share/ca-certificates/11096.pem
	I1007 11:48:07.658865   56886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11096.pem
	I1007 11:48:07.665411   56886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11096.pem /etc/ssl/certs/51391683.0"
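	The ln -fs targets above (/etc/ssl/certs/3ec20f2e.0, b5213941.0 and 51391683.0) are not arbitrary: each is the OpenSSL subject-hash of the corresponding CA certificate plus a ".0" suffix, which is the lookup scheme OpenSSL uses for /etc/ssl/certs. That is why every link is preceded by an openssl x509 -hash call, for example:
	
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # prints b5213941 on this run, matching the /etc/ssl/certs/b5213941.0 symlink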
	I1007 11:48:07.678525   56886 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 11:48:07.683638   56886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 11:48:07.690137   56886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 11:48:07.696834   56886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 11:48:07.703414   56886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 11:48:07.710055   56886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 11:48:07.716619   56886 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
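	Each of the openssl x509 -checkend 86400 runs above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; the command exits 0 if so and non-zero otherwise, presumably serving as a validity gate before the existing control-plane certs are reused. For example:
	
	    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	      && echo "still valid for at least 24h" || echo "expires within 24h"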
	I1007 11:48:07.722594   56886 kubeadm.go:392] StartCluster: {Name:pause-328632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:pause-328632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.219 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-secu
rity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:48:07.722710   56886 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 11:48:07.722764   56886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 11:48:07.923553   56886 cri.go:89] found id: "949e59046784c43201c4d9b447e8274967d2d64d08262b082084f7f81283274f"
	I1007 11:48:07.923583   56886 cri.go:89] found id: "34826c9a89b28965c296398aeec406d6c08dabe388de546f079ad262827a51b6"
	I1007 11:48:07.923588   56886 cri.go:89] found id: "97cadc0102ef22d68c6dfac6e8da330cb407ecdf72db6b70abf8d78d8b8d744c"
	I1007 11:48:07.923592   56886 cri.go:89] found id: "ed3fc62150c6cc0a757f8662ff2ad489307b473010c76c7e4cf39b46fcdc0ab6"
	I1007 11:48:07.923597   56886 cri.go:89] found id: "8738c004ac7649299b52691648c8ddd5b8c96190044c2155d113061ba85992a1"
	I1007 11:48:07.923601   56886 cri.go:89] found id: "013a9b10ece36ffe03232b8539a92f6679b62aa4c5570ff41898ae0975a779d7"
	I1007 11:48:07.923605   56886 cri.go:89] found id: ""
	I1007 11:48:07.923657   56886 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-328632 -n pause-328632
helpers_test.go:261: (dbg) Run:  kubectl --context pause-328632 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (32.85s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (7200.056s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
E1007 12:19:36.380641   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
E1007 12:20:02.977759   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/calico-167819/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
E1007 12:20:08.250550   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
E1007 12:20:22.726419   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/custom-flannel-167819/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
E1007 12:21:14.792857   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/enable-default-cni-167819/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.75:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.75:8443: connect: connection refused
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (37m4s)
		TestNetworkPlugins/group (28m31s)
		TestStartStop (35m28s)
		TestStartStop/group/default-k8s-diff-port (18m42s)
		TestStartStop/group/default-k8s-diff-port/serial (18m42s)
		TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (4m56s)
		TestStartStop/group/embed-certs (28m31s)
		TestStartStop/group/embed-certs/serial (28m31s)
		TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5m37s)
		TestStartStop/group/no-preload (29m9s)
		TestStartStop/group/no-preload/serial (29m9s)
		TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5m28s)
		TestStartStop/group/old-k8s-version (29m55s)
		TestStartStop/group/old-k8s-version/serial (29m55s)
		TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (2m25s)

goroutine 7879 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2373 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

goroutine 1 [chan receive, 18 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc00066ab60, 0xc000859bc8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
testing.runTests(0xc0005e6048, {0x51b7ac0, 0x2b, 0x2b}, {0xffffffffffffffff?, 0x411b30?, 0x52cfca0?})
	/usr/local/go/src/testing/testing.go:2166 +0x43d
testing.(*M).Run(0xc00062de00)
	/usr/local/go/src/testing/testing.go:2034 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc00062de00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0xa8

goroutine 5 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0004af400)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 3432 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394fe70, 0xc000100700}, 0xc000486750, 0xc000486798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394fe70, 0xc000100700}, 0xc0?, 0xc000486750, 0xc000486798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394fe70?, 0xc000100700?}, 0xc001eb81a0?, 0x559d80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0004867d0?, 0x593fe4?, 0xc0017042a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3459
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 7148 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x394fae8, 0xc000bb0cd0}, {0x3943440, 0xc0004a4d60}, 0x1, 0x0, 0xc001e41b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x394fb58?, 0xc00059c230?}, 0x3b9aca00, 0xc001421d38?, 0x1, 0xc001421b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x394fb58, 0xc00059c230}, 0xc00066b040, {0xc001d26948, 0x16}, {0x2c60d4f, 0x14}, {0x2c76d8d, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAddonAfterStop({0x394fb58, 0xc00059c230}, 0xc00066b040, {0xc001d26948, 0x16}, {0x2c52c89?, 0xc002271760?}, {0x559473?, 0x4b186f?}, {0xc000be8480, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:287 +0x125
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00066b040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00066b040, 0xc0001a4600)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3904
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 163 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0002159c0, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 122
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 3265 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394fe70, 0xc000100700}, 0xc001524f50, 0xc001524f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394fe70, 0xc000100700}, 0xc0?, 0xc001524f50, 0xc001524f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394fe70?, 0xc000100700?}, 0xc001eb84e0?, 0x559d80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0008ca7d0?, 0x593fe4?, 0xc0017042a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3319
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 6207 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x394fae8, 0xc00086f6d0}, {0x3943440, 0xc0005b1420}, 0x1, 0x0, 0xc000091b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x394fb58?, 0xc00002a070?}, 0x3b9aca00, 0xc001463d38?, 0x1, 0xc001463b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x394fb58, 0xc00002a070}, 0xc00066b860, {0xc001d26030, 0x11}, {0x2c60d4f, 0x14}, {0x2c76d8d, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAddonAfterStop({0x394fb58, 0xc00002a070}, 0xc00066b860, {0xc001d26030, 0x11}, {0x2c4767d?, 0xc001587760?}, {0x559473?, 0x4b186f?}, {0xc000bc2100, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:287 +0x125
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00066b860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00066b860, 0xc001e8e080)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4012
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 3318 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3946120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3314
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 1560 [chan send, 96 minutes]:
os/exec.(*Cmd).watchCtx(0xc00196fc80, 0xc001961a40)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1248
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 126 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000215990, 0x2c)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0000d1d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396b680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0002159c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008c4800, {0x3916f20, 0xc0005f62a0}, 0x1, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008c4800, 0x3b9aca00, 0x0, 0x1, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 163
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 4789 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4788
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 4129 [chan receive, 29 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0014cf700, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4177
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 1788 [select, 96 minutes]:
net/http.(*persistConn).readLoop(0xc001f259e0)
	/usr/local/go/src/net/http/transport.go:2325 +0xca5
created by net/http.(*Transport).dialConn in goroutine 1786
	/usr/local/go/src/net/http/transport.go:1874 +0x154f

goroutine 3242 [chan receive, 36 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc00070c820, 0x35da630)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 2637
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 127 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394fe70, 0xc000100700}, 0xc000488f50, 0xc001480f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394fe70, 0xc000100700}, 0x80?, 0xc000488f50, 0xc000488f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394fe70?, 0xc000100700?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000488fd0?, 0x593fe4?, 0xc000101180?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 163
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 162 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3946120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 122
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 1341 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3946120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1369
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 128 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 127
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 4308 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3946120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4338
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 1730 [chan send, 96 minutes]:
os/exec.(*Cmd).watchCtx(0xc0016f0180, 0xc0013f3500)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1649
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 1570 [chan send, 96 minutes]:
os/exec.(*Cmd).watchCtx(0xc001da0180, 0xc0018d8d20)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1505
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 4309 [chan receive, 18 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00172a840, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4338
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 3459 [chan receive, 33 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0019669c0, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3393
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 4128 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3946120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4177
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 4181 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0014cf6d0, 0x5)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000857d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396b680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0014cf700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008beee0, {0x3916f20, 0xc00192e330}, 0x1, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008beee0, 0x3b9aca00, 0x0, 0x1, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4129
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 3458 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3946120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3393
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2663 [chan receive, 38 minutes]:
testing.(*T).Run(0xc001604340, {0x2c3cf87?, 0x5595bc?}, 0xc001a2aba0)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc001604340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc001604340, 0x35da3f0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 4803 [chan receive, 11 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a6e600, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4769
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 3433 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3432
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3245 [chan receive, 18 minutes]:
testing.(*T).Run(0xc00070d520, {0x2c3e385?, 0x0?}, 0xc001d14200)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00070d520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00070d520, 0xc00172a140)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3242
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3706 [chan receive, 30 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0005e1d80, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3829
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 4343 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394fe70, 0xc000100700}, 0xc0015ab750, 0xc0015ab798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394fe70, 0xc000100700}, 0x40?, 0xc0015ab750, 0xc0015ab798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394fe70?, 0xc000100700?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0015ab7d0?, 0x593fe4?, 0xc000bdb340?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4309
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 3812 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394fe70, 0xc000100700}, 0xc001585f50, 0xc001585f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394fe70, 0xc000100700}, 0xd0?, 0xc001585f50, 0xc001585f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394fe70?, 0xc000100700?}, 0x9e9a36?, 0xc001d2d380?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001585fd0?, 0x593fe4?, 0xc000bda4d0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3706
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 6368 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x394fb58, 0xc0003baaf0}, {0x3943440, 0xc0016ad380}, 0x1, 0x0, 0xc001e3dc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x394fb58?, 0xc0005d8000?}, 0x3b9aca00, 0xc0016a7e10?, 0x1, 0xc0016a7c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x394fb58, 0xc0005d8000}, 0xc00066ba00, {0xc001a74000, 0x1c}, {0x2c60d4f, 0x14}, {0x2c76d8d, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x394fb58, 0xc0005d8000}, 0xc00066ba00, {0xc001a74000, 0x1c}, {0x2c638ed?, 0xc001b86f60?}, {0x559473?, 0x4b186f?}, {0xc001848100, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x139
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00066ba00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00066ba00, 0xc00071ac00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4268
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3560 [chan receive, 32 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001967f00, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3558
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 3705 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3946120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3829
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 1160 [IO wait, 101 minutes]:
internal/poll.runtime_pollWait(0x7f522d63e858, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0006aeb00?, 0x2c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0006aeb00)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc0006aeb00)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0005e1e80)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0005e1e80)
	/usr/local/go/src/net/tcpsock.go:372 +0x30
net/http.(*Server).Serve(0xc00023fa40, {0x3942de0, 0xc0005e1e80})
	/usr/local/go/src/net/http/server.go:3330 +0x30c
net/http.(*Server).ListenAndServe(0xc00023fa40)
	/usr/local/go/src/net/http/server.go:3259 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0016044e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1157
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

                                                
                                                
goroutine 4182 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394fe70, 0xc000100700}, 0xc0015aa750, 0xc001590f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394fe70, 0xc000100700}, 0xf0?, 0xc0015aa750, 0xc0015aa798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394fe70?, 0xc000100700?}, 0x10000c0014d7ba0?, 0x559d80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0015aa7d0?, 0x593fe4?, 0xc001d2c000?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4129
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 1313 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1312
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3813 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3812
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1311 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0007298d0, 0x28)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001625d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396b680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000729980)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001f0d1c0, {0x3916f20, 0xc000a6ae40}, 0x1, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001f0d1c0, 0x3b9aca00, 0x0, 0x1, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1342
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 3559 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3946120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3558
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3319 [chan receive, 33 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0019666c0, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3314
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 3923 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc001967a90, 0x16)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001709d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396b680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001967ac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001753480, {0x3916f20, 0xc001705890}, 0x1, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001753480, 0x3b9aca00, 0x0, 0x1, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3864
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 4410 [IO wait]:
internal/poll.runtime_pollWait(0x7f522c38de20, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001d15300?, 0xc0008b7000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001d15300, {0xc0008b7000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc001d15300, {0xc0008b7000?, 0x10?, 0xc00198a8a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0015644a0, {0xc0008b7000?, 0xc0008b7005?, 0x70?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc001a2a768, {0xc0008b7000?, 0x0?, 0xc001a2a768?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc0015a02b8, {0x3917560, 0xc001a2a768})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0015a0008, {0x7f522c33c2c0, 0xc001dcea98}, 0xc00198aa10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0015a0008, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc0015a0008, {0xc001596000, 0x1000, 0xc001a78380?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc0016b2b40, {0xc0004c7a80, 0x9, 0x5168880?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3915620, 0xc0016b2b40}, {0xc0004c7a80, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0004c7a80, 0x9, 0x47b965?}, {0x3915620?, 0xc0016b2b40?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0004c7a40)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00198afa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc00176a600)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 4409
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 2831 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0009127d0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1666 +0x5e5
testing.tRunner(0xc0014d7a00, 0xc001a2aba0)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 2663
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3246 [chan receive, 29 minutes]:
testing.(*T).Run(0xc00070d6c0, {0x2c3e385?, 0x0?}, 0xc00040c200)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00070d6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00070d6c0, 0xc00172a180)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3242
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3904 [chan receive, 2 minutes]:
testing.(*T).Run(0xc0014d7860, {0x2c60db3?, 0xc0015aa570?}, 0xc0001a4600)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0014d7860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0014d7860, 0xc0020b0300)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3243
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1312 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394fe70, 0xc000100700}, 0xc000488750, 0xc00148df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394fe70, 0xc000100700}, 0xd0?, 0xc000488750, 0xc000488798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394fe70?, 0xc000100700?}, 0xc00070c1a0?, 0x559d80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0004887d0?, 0x593fe4?, 0xc000bda4d0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1342
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 4787 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000a6e510, 0x2)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001ad9d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396b680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a6e600)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008c4080, {0x3916f20, 0xc001bac000}, 0x1, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008c4080, 0x3b9aca00, 0x0, 0x1, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4803
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 1789 [select, 96 minutes]:
net/http.(*persistConn).writeLoop(0xc001f259e0)
	/usr/local/go/src/net/http/transport.go:2519 +0xe7
created by net/http.(*Transport).dialConn in goroutine 1786
	/usr/local/go/src/net/http/transport.go:1875 +0x15a5

                                                
                                                
goroutine 4066 [chan receive, 29 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001534440, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3965
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 3574 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001967ed0, 0x17)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001522d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396b680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001967f00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001c7b6c0, {0x3916f20, 0xc001d342d0}, 0x1, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001c7b6c0, 0x3b9aca00, 0x0, 0x1, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3560
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 4012 [chan receive, 5 minutes]:
testing.(*T).Run(0xc001eb81a0, {0x2c60db3?, 0xc001588570?}, 0xc001e8e080)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001eb81a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001eb81a0, 0xc00040c200)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3246
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3575 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394fe70, 0xc000100700}, 0xc000be4750, 0xc000be4798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394fe70, 0xc000100700}, 0x0?, 0xc000be4750, 0xc000be4798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394fe70?, 0xc000100700?}, 0x9e9a36?, 0xc001406600?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000be47d0?, 0x593fe4?, 0xc001960700?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3560
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 1342 [chan receive, 97 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000729980, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1369
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 4788 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394fe70, 0xc000100700}, 0xc0008d6f50, 0xc0008d6f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394fe70, 0xc000100700}, 0xb0?, 0xc0008d6f50, 0xc0008d6f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394fe70?, 0xc000100700?}, 0x9e9a36?, 0xc00190fc80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00158afd0?, 0x593fe4?, 0xc0018d8cb0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4803
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 4183 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4182
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3864 [chan receive, 30 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001967ac0, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3862
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2637 [chan receive, 36 minutes]:
testing.(*T).Run(0xc001604d00, {0x2c3cf87?, 0x559473?}, 0x35da630)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc001604d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc001604d00, 0x35da438)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3431 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001966990, 0x17)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00148bd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396b680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0019669c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0014f9850, {0x3916f20, 0xc001dd38f0}, 0x1, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0014f9850, 0x3b9aca00, 0x0, 0x1, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3459
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 3650 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3585
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3243 [chan receive, 30 minutes]:
testing.(*T).Run(0xc00070d1e0, {0x2c3e385?, 0x0?}, 0xc0020b0300)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00070d1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00070d1e0, 0xc00172a0c0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3242
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3924 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394fe70, 0xc000100700}, 0xc0015a8f50, 0xc00148ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394fe70, 0xc000100700}, 0xa0?, 0xc0015a8f50, 0xc0015a8f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394fe70?, 0xc000100700?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593f85?, 0xc000be8180?, 0xc000bdb0a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3864
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 4342 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00172a810, 0x3)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001adbd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396b680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00172a840)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001c7a6f0, {0x3916f20, 0xc001578450}, 0x1, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001c7a6f0, 0x3b9aca00, 0x0, 0x1, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4309
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 4802 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3946120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4769
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3248 [chan receive, 29 minutes]:
testing.(*T).Run(0xc00070da00, {0x2c3e385?, 0x0?}, 0xc0001a4900)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00070da00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00070da00, 0xc00172a240)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3242
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3576 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3575
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3330 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3265
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3244 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0009127d0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc00070d380)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00070d380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00070d380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00070d380, 0xc00172a100)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3242
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3863 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3946120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3862
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3264 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001966690, 0x17)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001591d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396b680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0019666c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0014f8630, {0x3916f20, 0xc001d6c090}, 0x1, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0014f8630, 0x3b9aca00, 0x0, 0x1, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3319
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 3811 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0005e1d50, 0x17)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001593d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396b680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0005e1d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0022122d0, {0x3916f20, 0xc001f44210}, 0x1, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0022122d0, 0x3b9aca00, 0x0, 0x1, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3706
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 4268 [chan receive, 5 minutes]:
testing.(*T).Run(0xc001604680, {0x2c662c2?, 0xc000489570?}, 0xc00071ac00)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001604680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001604680, 0xc001d14200)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3245
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 4136 [chan receive, 5 minutes]:
testing.(*T).Run(0xc0014d71e0, {0x2c60db3?, 0xc00226ed70?}, 0xc001dc0100)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0014d71e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0014d71e0, 0xc0001a4900)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3248
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3585 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394fe70, 0xc000100700}, 0xc001483f50, 0xc001483f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394fe70, 0xc000100700}, 0x30?, 0xc001483f50, 0xc001483f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394fe70?, 0xc000100700?}, 0x9e9a36?, 0xc0015ddb00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593f85?, 0xc001d2cc00?, 0xc0013f2930?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3639
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 3638 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3946120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3623
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3639 [chan receive, 32 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001534f80, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3623
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 3584 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001534f50, 0x17)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0008d3d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396b680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001534f80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001d89080, {0x3916f20, 0xc001a5ee70}, 0x1, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001d89080, 0x3b9aca00, 0x0, 0x1, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3639
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 4033 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3946120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3965
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 6390 [IO wait]:
internal/poll.runtime_pollWait(0x7f522d63e750, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc00071b900?, 0xc0013e8800?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00071b900, {0xc0013e8800, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc00071b900, {0xc0013e8800?, 0x10?, 0xc0000d28a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0015640d8, {0xc0013e8800?, 0xc0013e885f?, 0x70?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc001a2a690, {0xc0013e8800?, 0x0?, 0xc001a2a690?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc001da89b8, {0x3917560, 0xc001a2a690})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc001da8708, {0x7f522c33c2c0, 0xc0004d43d8}, 0xc0000d2a10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc001da8708, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc001da8708, {0xc0017f0000, 0x1000, 0xc001a78380?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc000afff20, {0xc0004c7460, 0x9, 0x5168880?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3915620, 0xc000afff20}, {0xc0004c7460, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0004c7460, 0x9, 0x47b965?}, {0x3915620?, 0xc000afff20?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0004c7420)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0000d2fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000be8180)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 6389
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 4391 [IO wait]:
internal/poll.runtime_pollWait(0x7f522d63e120, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001764780?, 0xc000642000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001764780, {0xc000642000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc001764780, {0xc000642000?, 0x9d7032?, 0xc00170c9a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0006b2508, {0xc000642000?, 0xc000556360?, 0xc00064205f?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc001a2a6f0, {0xc000642000?, 0x0?, 0xc001a2a6f0?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc001da82b8, {0x3917560, 0xc001a2a6f0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc001da8008, {0x3916a40, 0xc0006b2508}, 0xc00170ca10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc001da8008, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc001da8008, {0xc000a9c000, 0x1000, 0xc001a78380?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc0013fb6e0, {0xc001a342e0, 0x9, 0x5168880?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3915620, 0xc0013fb6e0}, {0xc001a342e0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc001a342e0, 0x9, 0x47b965?}, {0x3915620?, 0xc0013fb6e0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc001a342a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00170cfa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc00190e900)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 4390
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 3925 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3924
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 6212 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x394fae8, 0xc0008a7630}, {0x3943440, 0xc0015a4560}, 0x1, 0x0, 0xc001e49b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x394fb58?, 0xc000446f50?}, 0x3b9aca00, 0xc001463d38?, 0x1, 0xc001463b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x394fb58, 0xc000446f50}, 0xc00066b6c0, {0xc001be6600, 0x12}, {0x2c60d4f, 0x14}, {0x2c76d8d, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAddonAfterStop({0x394fb58, 0xc000446f50}, 0xc00066b6c0, {0xc001be6600, 0x12}, {0x2c49543?, 0xc001b8cf60?}, {0x559473?, 0x4b186f?}, {0xc001848000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:287 +0x125
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00066b6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00066b6c0, 0xc001dc0100)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4136
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 4051 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001534410, 0x16)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001624d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396b680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001534440)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00085e9a0, {0x3916f20, 0xc001f101e0}, 0x1, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00085e9a0, 0x3b9aca00, 0x0, 0x1, 0xc000100700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4066
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 4052 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394fe70, 0xc000100700}, 0xc002275f50, 0xc002275f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394fe70, 0xc000100700}, 0x6e?, 0xc002275f50, 0xc002275f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394fe70?, 0xc000100700?}, 0x7273752f203e2d2d?, 0x632f65726168732f?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002275fd0?, 0x593fe4?, 0x3a38343a31312037?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4066
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 4053 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4052
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4344 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4343
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                    

Test pass (169/222)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 27.04
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 13.53
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.14
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.6
22 TestOffline 110.04
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 135.78
31 TestAddons/serial/GCPAuth/Namespaces 0.15
34 TestAddons/parallel/Registry 17.17
36 TestAddons/parallel/InspektorGadget 12.12
39 TestAddons/parallel/CSI 56.59
40 TestAddons/parallel/Headlamp 19.03
41 TestAddons/parallel/CloudSpanner 6.58
42 TestAddons/parallel/LocalPath 56.43
43 TestAddons/parallel/NvidiaDevicePlugin 5.71
44 TestAddons/parallel/Yakd 12.03
46 TestCertOptions 76.24
49 TestForceSystemdFlag 73.92
50 TestForceSystemdEnv 47.09
52 TestKVMDriverInstallOrUpdate 5.14
56 TestErrorSpam/setup 42.75
57 TestErrorSpam/start 0.35
58 TestErrorSpam/status 0.74
59 TestErrorSpam/pause 1.62
60 TestErrorSpam/unpause 1.73
61 TestErrorSpam/stop 5.26
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 85.76
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 54.6
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.39
73 TestFunctional/serial/CacheCmd/cache/add_local 2.26
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.74
78 TestFunctional/serial/CacheCmd/cache/delete 0.1
79 TestFunctional/serial/MinikubeKubectlCmd 0.11
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 34.91
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.52
84 TestFunctional/serial/LogsFileCmd 1.51
85 TestFunctional/serial/InvalidService 4.24
87 TestFunctional/parallel/ConfigCmd 0.35
88 TestFunctional/parallel/DashboardCmd 38.45
89 TestFunctional/parallel/DryRun 0.3
90 TestFunctional/parallel/InternationalLanguage 0.15
91 TestFunctional/parallel/StatusCmd 1.06
95 TestFunctional/parallel/ServiceCmdConnect 11.64
96 TestFunctional/parallel/AddonsCmd 0.13
97 TestFunctional/parallel/PersistentVolumeClaim 50.36
99 TestFunctional/parallel/SSHCmd 0.4
100 TestFunctional/parallel/CpCmd 1.37
101 TestFunctional/parallel/MySQL 25.02
102 TestFunctional/parallel/FileSync 0.27
103 TestFunctional/parallel/CertSync 1.34
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.41
111 TestFunctional/parallel/License 0.71
112 TestFunctional/parallel/Version/short 0.05
113 TestFunctional/parallel/Version/components 0.71
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.33
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
118 TestFunctional/parallel/ImageCommands/ImageBuild 5.89
119 TestFunctional/parallel/ImageCommands/Setup 1.92
120 TestFunctional/parallel/ServiceCmd/DeployApp 11.21
130 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.85
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.88
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.86
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.82
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
137 TestFunctional/parallel/ProfileCmd/profile_not_create 0.54
138 TestFunctional/parallel/ServiceCmd/List 0.47
139 TestFunctional/parallel/ProfileCmd/profile_list 0.46
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.4
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
142 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
143 TestFunctional/parallel/MountCmd/any-port 26.76
144 TestFunctional/parallel/ServiceCmd/Format 0.49
145 TestFunctional/parallel/ServiceCmd/URL 0.39
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.3
149 TestFunctional/parallel/MountCmd/specific-port 1.98
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.61
151 TestFunctional/delete_echo-server_images 0.03
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.02
157 TestMultiControlPlane/serial/StartCluster 204.59
158 TestMultiControlPlane/serial/DeployApp 8.03
159 TestMultiControlPlane/serial/PingHostFromPods 1.29
160 TestMultiControlPlane/serial/AddWorkerNode 59.64
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
163 TestMultiControlPlane/serial/CopyFile 13.2
172 TestMultiControlPlane/serial/RestartCluster 236.77
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
174 TestMultiControlPlane/serial/AddSecondaryNode 77.5
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
179 TestJSONOutput/start/Command 59.75
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.71
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.64
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.33
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.2
207 TestMainNoArgs 0.05
208 TestMinikubeProfile 89.83
211 TestMountStart/serial/StartWithMountFirst 29.64
212 TestMountStart/serial/VerifyMountFirst 0.38
213 TestMountStart/serial/StartWithMountSecond 28.51
214 TestMountStart/serial/VerifyMountSecond 0.38
215 TestMountStart/serial/DeleteFirst 0.68
216 TestMountStart/serial/VerifyMountPostDelete 0.39
217 TestMountStart/serial/Stop 1.61
218 TestMountStart/serial/RestartStopped 23.53
219 TestMountStart/serial/VerifyMountPostStop 0.38
222 TestMultiNode/serial/FreshStart2Nodes 116.31
223 TestMultiNode/serial/DeployApp2Nodes 6.19
224 TestMultiNode/serial/PingHostFrom2Pods 0.83
225 TestMultiNode/serial/AddNode 49.53
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.6
228 TestMultiNode/serial/CopyFile 7.23
229 TestMultiNode/serial/StopNode 2.29
230 TestMultiNode/serial/StartAfterStop 38.76
232 TestMultiNode/serial/DeleteNode 2.11
234 TestMultiNode/serial/RestartMultiNode 182.37
235 TestMultiNode/serial/ValidateNameConflict 41.42
242 TestScheduledStopUnix 111.76
246 TestRunningBinaryUpgrade 236.95
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
252 TestNoKubernetes/serial/StartWithK8s 97.09
253 TestNoKubernetes/serial/StartWithStopK8s 17.32
254 TestStoppedBinaryUpgrade/Setup 2.64
255 TestStoppedBinaryUpgrade/Upgrade 100.25
256 TestNoKubernetes/serial/Start 52.7
257 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
258 TestNoKubernetes/serial/ProfileList 1.9
259 TestNoKubernetes/serial/Stop 1.59
260 TestNoKubernetes/serial/StartNoArgs 39.39
261 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
262 TestStoppedBinaryUpgrade/MinikubeLogs 0.96
282 TestPause/serial/Start 108.42
TestDownloadOnly/v1.20.0/json-events (27.04s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-052891 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-052891 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (27.035483429s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (27.04s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1007 10:22:04.889986   11096 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1007 10:22:04.890088   11096 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-052891
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-052891: exit status 85 (58.149101ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-052891 | jenkins | v1.34.0 | 07 Oct 24 10:21 UTC |          |
	|         | -p download-only-052891        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 10:21:37
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 10:21:37.895369   11108 out.go:345] Setting OutFile to fd 1 ...
	I1007 10:21:37.895467   11108 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:21:37.895472   11108 out.go:358] Setting ErrFile to fd 2...
	I1007 10:21:37.895476   11108 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:21:37.895663   11108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	W1007 10:21:37.895796   11108 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19761-3912/.minikube/config/config.json: open /home/jenkins/minikube-integration/19761-3912/.minikube/config/config.json: no such file or directory
	I1007 10:21:37.896394   11108 out.go:352] Setting JSON to true
	I1007 10:21:37.897259   11108 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":192,"bootTime":1728296306,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 10:21:37.897364   11108 start.go:139] virtualization: kvm guest
	I1007 10:21:37.899837   11108 out.go:97] [download-only-052891] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1007 10:21:37.899961   11108 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball: no such file or directory
	I1007 10:21:37.900048   11108 notify.go:220] Checking for updates...
	I1007 10:21:37.901256   11108 out.go:169] MINIKUBE_LOCATION=19761
	I1007 10:21:37.902520   11108 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 10:21:37.903741   11108 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:21:37.904981   11108 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:21:37.906142   11108 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1007 10:21:37.908278   11108 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1007 10:21:37.908513   11108 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 10:21:38.016999   11108 out.go:97] Using the kvm2 driver based on user configuration
	I1007 10:21:38.017024   11108 start.go:297] selected driver: kvm2
	I1007 10:21:38.017031   11108 start.go:901] validating driver "kvm2" against <nil>
	I1007 10:21:38.017362   11108 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:21:38.017488   11108 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19761-3912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 10:21:38.032319   11108 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 10:21:38.032370   11108 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 10:21:38.032903   11108 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1007 10:21:38.033065   11108 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 10:21:38.033094   11108 cni.go:84] Creating CNI manager for ""
	I1007 10:21:38.033149   11108 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 10:21:38.033160   11108 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 10:21:38.033245   11108 start.go:340] cluster config:
	{Name:download-only-052891 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-052891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:21:38.033472   11108 iso.go:125] acquiring lock: {Name:mk894f62b1e70fe8556c552f818beb1e4be6fad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:21:38.035581   11108 out.go:97] Downloading VM boot image ...
	I1007 10:21:38.035621   11108 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19761-3912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 10:21:48.461648   11108 out.go:97] Starting "download-only-052891" primary control-plane node in "download-only-052891" cluster
	I1007 10:21:48.461687   11108 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1007 10:21:48.571996   11108 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1007 10:21:48.572025   11108 cache.go:56] Caching tarball of preloaded images
	I1007 10:21:48.572181   11108 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1007 10:21:48.574432   11108 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1007 10:21:48.574452   11108 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1007 10:21:48.691537   11108 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-052891 host does not exist
	  To start a cluster, run: "minikube start -p download-only-052891"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-052891
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (13.53s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-484375 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-484375 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.531153124s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (13.53s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1007 10:22:18.741974   11096 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I1007 10:22:18.742014   11096 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-484375
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-484375: exit status 85 (57.720981ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-052891 | jenkins | v1.34.0 | 07 Oct 24 10:21 UTC |                     |
	|         | -p download-only-052891        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 07 Oct 24 10:22 UTC | 07 Oct 24 10:22 UTC |
	| delete  | -p download-only-052891        | download-only-052891 | jenkins | v1.34.0 | 07 Oct 24 10:22 UTC | 07 Oct 24 10:22 UTC |
	| start   | -o=json --download-only        | download-only-484375 | jenkins | v1.34.0 | 07 Oct 24 10:22 UTC |                     |
	|         | -p download-only-484375        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 10:22:05
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 10:22:05.251797   11376 out.go:345] Setting OutFile to fd 1 ...
	I1007 10:22:05.251901   11376 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:22:05.251912   11376 out.go:358] Setting ErrFile to fd 2...
	I1007 10:22:05.251916   11376 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:22:05.252131   11376 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 10:22:05.252711   11376 out.go:352] Setting JSON to true
	I1007 10:22:05.253537   11376 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":219,"bootTime":1728296306,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 10:22:05.253630   11376 start.go:139] virtualization: kvm guest
	I1007 10:22:05.255903   11376 out.go:97] [download-only-484375] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 10:22:05.256052   11376 notify.go:220] Checking for updates...
	I1007 10:22:05.257334   11376 out.go:169] MINIKUBE_LOCATION=19761
	I1007 10:22:05.258579   11376 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 10:22:05.260163   11376 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:22:05.261649   11376 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:22:05.262922   11376 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1007 10:22:05.265431   11376 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1007 10:22:05.265697   11376 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 10:22:05.298709   11376 out.go:97] Using the kvm2 driver based on user configuration
	I1007 10:22:05.298747   11376 start.go:297] selected driver: kvm2
	I1007 10:22:05.298755   11376 start.go:901] validating driver "kvm2" against <nil>
	I1007 10:22:05.299101   11376 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:22:05.299236   11376 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19761-3912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 10:22:05.314795   11376 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 10:22:05.314871   11376 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 10:22:05.315427   11376 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1007 10:22:05.315587   11376 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 10:22:05.315615   11376 cni.go:84] Creating CNI manager for ""
	I1007 10:22:05.315688   11376 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 10:22:05.315698   11376 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 10:22:05.315767   11376 start.go:340] cluster config:
	{Name:download-only-484375 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-484375 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:22:05.315867   11376 iso.go:125] acquiring lock: {Name:mk894f62b1e70fe8556c552f818beb1e4be6fad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:22:05.317446   11376 out.go:97] Starting "download-only-484375" primary control-plane node in "download-only-484375" cluster
	I1007 10:22:05.317462   11376 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:22:05.915112   11376 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 10:22:05.915139   11376 cache.go:56] Caching tarball of preloaded images
	I1007 10:22:05.915296   11376 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:22:05.917103   11376 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1007 10:22:05.917127   11376 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I1007 10:22:06.028637   11376 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19761-3912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-484375 host does not exist
	  To start a cluster, run: "minikube start -p download-only-484375"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-484375
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I1007 10:22:19.312482   11096 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-079912 --alsologtostderr --binary-mirror http://127.0.0.1:43695 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-079912" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-079912
--- PASS: TestBinaryMirror (0.60s)

                                                
                                    
TestOffline (110.04s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-749217 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-749217 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m49.173874106s)
helpers_test.go:175: Cleaning up "offline-crio-749217" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-749217
--- PASS: TestOffline (110.04s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:934: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-681605
addons_test.go:934: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-681605: exit status 85 (52.149413ms)

                                                
                                                
-- stdout --
	* Profile "addons-681605" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-681605"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:945: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-681605
addons_test.go:945: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-681605: exit status 85 (49.995138ms)

                                                
                                                
-- stdout --
	* Profile "addons-681605" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-681605"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (135.78s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-681605 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-681605 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m15.779182771s)
--- PASS: TestAddons/Setup (135.78s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-681605 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-681605 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
TestAddons/parallel/Registry (17.17s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.278852ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-j5b9g" [16a6aecf-e13b-4534-83e7-70fdf57bd954] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006317594s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-tr9b7" [2c257dda-ca4a-4383-904e-6a600fa871bd] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005381477s
addons_test.go:331: (dbg) Run:  kubectl --context addons-681605 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-681605 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-681605 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.387605917s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-681605 ip
2024/10/07 10:33:04 [DEBUG] GET http://192.168.39.71:5000
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-681605 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.17s)

                                                
                                    
TestAddons/parallel/InspektorGadget (12.12s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-zdh2j" [d69ccefc-0472-410c-8e41-a653a5bbfe83] Running
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004882092s
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-681605 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-amd64 -p addons-681605 addons disable inspektor-gadget --alsologtostderr -v=1: (6.114689182s)
--- PASS: TestAddons/parallel/InspektorGadget (12.12s)

                                                
                                    
TestAddons/parallel/CSI (56.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1007 10:33:12.221320   11096 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1007 10:33:12.240779   11096 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1007 10:33:12.240814   11096 kapi.go:107] duration metric: took 19.508841ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 19.520168ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-681605 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-681605 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a81aeb95-720c-44d1-8bc5-e6964bd1dbd4] Pending
helpers_test.go:344: "task-pv-pod" [a81aeb95-720c-44d1-8bc5-e6964bd1dbd4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a81aeb95-720c-44d1-8bc5-e6964bd1dbd4] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004219178s
addons_test.go:511: (dbg) Run:  kubectl --context addons-681605 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-681605 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-681605 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-681605 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-681605 delete pod task-pv-pod: (1.633941735s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-681605 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-681605 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-681605 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [8fce54f2-fd28-44c3-a9b2-ad5444cab831] Pending
helpers_test.go:344: "task-pv-pod-restore" [8fce54f2-fd28-44c3-a9b2-ad5444cab831] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [8fce54f2-fd28-44c3-a9b2-ad5444cab831] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005091725s
addons_test.go:553: (dbg) Run:  kubectl --context addons-681605 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-681605 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-681605 delete volumesnapshot new-snapshot-demo
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-681605 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-amd64 -p addons-681605 addons disable volumesnapshots --alsologtostderr -v=1: (1.038011447s)
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-681605 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-amd64 -p addons-681605 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.021059442s)
--- PASS: TestAddons/parallel/CSI (56.59s)

                                                
                                    
TestAddons/parallel/Headlamp (19.03s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-681605 --alsologtostderr -v=1
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-xhks6" [04f09f47-a50d-4884-a7d0-fb1911300873] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-xhks6" [04f09f47-a50d-4884-a7d0-fb1911300873] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-xhks6" [04f09f47-a50d-4884-a7d0-fb1911300873] Running
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004270906s
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-681605 addons disable headlamp --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-amd64 -p addons-681605 addons disable headlamp --alsologtostderr -v=1: (6.116984269s)
--- PASS: TestAddons/parallel/Headlamp (19.03s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-92tpz" [19caeed9-05c0-47d3-a315-23ca78aef135] Running
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004348948s
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-681605 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.58s)

                                                
                                    
TestAddons/parallel/LocalPath (56.43s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:883: (dbg) Run:  kubectl --context addons-681605 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:889: (dbg) Run:  kubectl --context addons-681605 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:893: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681605 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:896: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [fb84cacd-7ae8-498f-b440-66ce0253cca7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [fb84cacd-7ae8-498f-b440-66ce0253cca7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [fb84cacd-7ae8-498f-b440-66ce0253cca7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:896: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003554761s
addons_test.go:901: (dbg) Run:  kubectl --context addons-681605 get pvc test-pvc -o=json
addons_test.go:910: (dbg) Run:  out/minikube-linux-amd64 -p addons-681605 ssh "cat /opt/local-path-provisioner/pvc-44bb06b3-65c8-40a0-8efe-d6acb8e8851b_default_test-pvc/file1"
addons_test.go:922: (dbg) Run:  kubectl --context addons-681605 delete pod test-local-path
addons_test.go:926: (dbg) Run:  kubectl --context addons-681605 delete pvc test-pvc
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-681605 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-amd64 -p addons-681605 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.417462195s)
--- PASS: TestAddons/parallel/LocalPath (56.43s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.71s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:958: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-5qr65" [50ebff62-241e-44a1-a190-cbc7791e17c6] Running
addons_test.go:958: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.008219009s
addons_test.go:961: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-681605
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.71s)

                                                
                                    
TestAddons/parallel/Yakd (12.03s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-trw5s" [edea7e73-6b0e-48b3-b89b-3bc006962e13] Running
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004303318s
addons_test.go:973: (dbg) Run:  out/minikube-linux-amd64 -p addons-681605 addons disable yakd --alsologtostderr -v=1
addons_test.go:973: (dbg) Done: out/minikube-linux-amd64 -p addons-681605 addons disable yakd --alsologtostderr -v=1: (6.025119752s)
--- PASS: TestAddons/parallel/Yakd (12.03s)

                                                
                                    
TestCertOptions (76.24s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-495675 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-495675 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m14.792603592s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-495675 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-495675 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-495675 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-495675" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-495675
--- PASS: TestCertOptions (76.24s)

                                                
                                    
TestForceSystemdFlag (73.92s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-468078 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1007 11:45:08.250756   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-468078 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m12.722709691s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-468078 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-468078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-468078
--- PASS: TestForceSystemdFlag (73.92s)

                                                
                                    
TestForceSystemdEnv (47.09s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-264062 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-264062 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.282658734s)
helpers_test.go:175: Cleaning up "force-systemd-env-264062" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-264062
--- PASS: TestForceSystemdEnv (47.09s)

                                                
                                    
TestKVMDriverInstallOrUpdate (5.14s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1007 11:44:44.129965   11096 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1007 11:44:44.130127   11096 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1007 11:44:44.162306   11096 install.go:62] docker-machine-driver-kvm2: exit status 1
W1007 11:44:44.162727   11096 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1007 11:44:44.162799   11096 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2961900629/001/docker-machine-driver-kvm2
I1007 11:44:44.408785   11096 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2961900629/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x52f3c60 0x52f3c60 0x52f3c60 0x52f3c60 0x52f3c60 0x52f3c60 0x52f3c60] Decompressors:map[bz2:0xc00053c1a0 gz:0xc00053c1a8 tar:0xc00053c150 tar.bz2:0xc00053c160 tar.gz:0xc00053c170 tar.xz:0xc00053c180 tar.zst:0xc00053c190 tbz2:0xc00053c160 tgz:0xc00053c170 txz:0xc00053c180 tzst:0xc00053c190 xz:0xc00053c1b0 zip:0xc00053c1c0 zst:0xc00053c1b8] Getters:map[file:0xc000ae6d80 http:0xc00071caa0 https:0xc00071caf0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1007 11:44:44.408826   11096 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2961900629/001/docker-machine-driver-kvm2
I1007 11:44:47.327951   11096 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1007 11:44:47.328065   11096 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1007 11:44:47.357366   11096 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1007 11:44:47.357393   11096 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1007 11:44:47.357445   11096 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1007 11:44:47.357476   11096 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2961900629/002/docker-machine-driver-kvm2
I1007 11:44:47.416959   11096 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2961900629/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x52f3c60 0x52f3c60 0x52f3c60 0x52f3c60 0x52f3c60 0x52f3c60 0x52f3c60] Decompressors:map[bz2:0xc00053c1a0 gz:0xc00053c1a8 tar:0xc00053c150 tar.bz2:0xc00053c160 tar.gz:0xc00053c170 tar.xz:0xc00053c180 tar.zst:0xc00053c190 tbz2:0xc00053c160 tgz:0xc00053c170 txz:0xc00053c180 tzst:0xc00053c190 xz:0xc00053c1b0 zip:0xc00053c1c0 zst:0xc00053c1b8] Getters:map[file:0xc000ae7c00 http:0xc0008a6820 https:0xc0008a6870] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1007 11:44:47.417001   11096 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2961900629/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (5.14s)
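
The two driver.go:46 entries above record the install fallback: downloading the arch-suffixed release asset fails because its checksum file returns HTTP 404, so the installer retries the common asset name. The Go program below is only a minimal sketch of that try-then-fall-back pattern; fetch and downloadDriver are hypothetical helpers written for illustration (not minikube's implementation), and the URLs are simply the ones quoted in the log.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetch downloads url into dst and treats any non-200 status as an error.
func fetch(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, resp.Body)
	return err
}

// downloadDriver tries the arch-specific asset first and, on any failure,
// falls back to the common asset name, mirroring the behaviour in the log.
func downloadDriver(archURL, commonURL, dst string) error {
	if err := fetch(archURL, dst); err != nil {
		fmt.Printf("failed to download arch specific driver: %v. trying to get the common version\n", err)
		return fetch(commonURL, dst)
	}
	return nil
}

func main() {
	// URLs taken from the log entries above; the destination path is illustrative.
	if err := downloadDriver(
		"https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64",
		"https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2",
		"/tmp/docker-machine-driver-kvm2",
	); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}

The real installer also validates a checksum file alongside the binary; the sketch collapses that to a plain status-code check so the fallback logic stays visible.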

                                                
                                    
TestErrorSpam/setup (42.75s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-041520 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-041520 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-041520 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-041520 --driver=kvm2  --container-runtime=crio: (42.749129747s)
--- PASS: TestErrorSpam/setup (42.75s)

                                                
                                    
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-041520 --log_dir /tmp/nospam-041520 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-041520 --log_dir /tmp/nospam-041520 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-041520 --log_dir /tmp/nospam-041520 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
TestErrorSpam/status (0.74s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-041520 --log_dir /tmp/nospam-041520 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-041520 --log_dir /tmp/nospam-041520 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-041520 --log_dir /tmp/nospam-041520 status
--- PASS: TestErrorSpam/status (0.74s)

                                                
                                    
TestErrorSpam/pause (1.62s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-041520 --log_dir /tmp/nospam-041520 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-041520 --log_dir /tmp/nospam-041520 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-041520 --log_dir /tmp/nospam-041520 pause
--- PASS: TestErrorSpam/pause (1.62s)

                                                
                                    
TestErrorSpam/unpause (1.73s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-041520 --log_dir /tmp/nospam-041520 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-041520 --log_dir /tmp/nospam-041520 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-041520 --log_dir /tmp/nospam-041520 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

                                                
                                    
TestErrorSpam/stop (5.26s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-041520 --log_dir /tmp/nospam-041520 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-041520 --log_dir /tmp/nospam-041520 stop: (2.313549293s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-041520 --log_dir /tmp/nospam-041520 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-041520 --log_dir /tmp/nospam-041520 stop: (1.474722035s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-041520 --log_dir /tmp/nospam-041520 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-041520 --log_dir /tmp/nospam-041520 stop: (1.469511147s)
--- PASS: TestErrorSpam/stop (5.26s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19761-3912/.minikube/files/etc/test/nested/copy/11096/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (85.76s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-382950 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-382950 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m25.763924781s)
--- PASS: TestFunctional/serial/StartWithProxy (85.76s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (54.6s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1007 10:43:22.217830   11096 config.go:182] Loaded profile config "functional-382950": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-382950 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-382950 --alsologtostderr -v=8: (54.602349381s)
functional_test.go:663: soft start took 54.603132557s for "functional-382950" cluster.
I1007 10:44:16.820544   11096 config.go:182] Loaded profile config "functional-382950": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (54.60s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-382950 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.39s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-382950 cache add registry.k8s.io/pause:3.1: (1.095965908s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-382950 cache add registry.k8s.io/pause:3.3: (1.179374174s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-382950 cache add registry.k8s.io/pause:latest: (1.118169733s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.39s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-382950 /tmp/TestFunctionalserialCacheCmdcacheadd_local3480219730/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 cache add minikube-local-cache-test:functional-382950
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-382950 cache add minikube-local-cache-test:functional-382950: (1.92837579s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 cache delete minikube-local-cache-test:functional-382950
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-382950
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-382950 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (210.86281ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-382950 cache reload: (1.02943571s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 kubectl -- --context functional-382950 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-382950 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (34.91s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-382950 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1007 10:44:36.380738   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:44:36.387216   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:44:36.398614   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:44:36.420028   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:44:36.461459   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:44:36.542883   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:44:36.704427   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:44:37.026155   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:44:37.668296   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:44:38.949995   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:44:41.513014   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:44:46.634421   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:44:56.876751   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-382950 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.907962505s)
functional_test.go:761: restart took 34.908084874s for "functional-382950" cluster.
I1007 10:44:59.882095   11096 config.go:182] Loaded profile config "functional-382950": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (34.91s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-382950 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-382950 logs: (1.5192465s)
--- PASS: TestFunctional/serial/LogsCmd (1.52s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 logs --file /tmp/TestFunctionalserialLogsFileCmd463177820/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-382950 logs --file /tmp/TestFunctionalserialLogsFileCmd463177820/001/logs.txt: (1.505285043s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                    
TestFunctional/serial/InvalidService (4.24s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-382950 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-382950
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-382950: exit status 115 (274.565805ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.107:30757 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-382950 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.24s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-382950 config get cpus: exit status 14 (56.434978ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-382950 config get cpus: exit status 14 (53.937747ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (38.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-382950 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-382950 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 22568: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (38.45s)

                                                
                                    
TestFunctional/parallel/DryRun (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-382950 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-382950 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (142.829987ms)

                                                
                                                
-- stdout --
	* [functional-382950] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 10:45:20.776652   22271 out.go:345] Setting OutFile to fd 1 ...
	I1007 10:45:20.776794   22271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:45:20.776806   22271 out.go:358] Setting ErrFile to fd 2...
	I1007 10:45:20.776812   22271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:45:20.777101   22271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 10:45:20.777822   22271 out.go:352] Setting JSON to false
	I1007 10:45:20.779079   22271 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1615,"bootTime":1728296306,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 10:45:20.779203   22271 start.go:139] virtualization: kvm guest
	I1007 10:45:20.781494   22271 out.go:177] * [functional-382950] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 10:45:20.782940   22271 notify.go:220] Checking for updates...
	I1007 10:45:20.782969   22271 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 10:45:20.784446   22271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 10:45:20.785969   22271 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:45:20.787377   22271 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:45:20.788676   22271 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 10:45:20.789979   22271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 10:45:20.791806   22271 config.go:182] Loaded profile config "functional-382950": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:45:20.792258   22271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:45:20.792318   22271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:45:20.807685   22271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36043
	I1007 10:45:20.808175   22271 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:45:20.808764   22271 main.go:141] libmachine: Using API Version  1
	I1007 10:45:20.808787   22271 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:45:20.809175   22271 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:45:20.809375   22271 main.go:141] libmachine: (functional-382950) Calling .DriverName
	I1007 10:45:20.809625   22271 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 10:45:20.809961   22271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:45:20.810007   22271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:45:20.824515   22271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41915
	I1007 10:45:20.824958   22271 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:45:20.825502   22271 main.go:141] libmachine: Using API Version  1
	I1007 10:45:20.825532   22271 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:45:20.825823   22271 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:45:20.825985   22271 main.go:141] libmachine: (functional-382950) Calling .DriverName
	I1007 10:45:20.859389   22271 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 10:45:20.860920   22271 start.go:297] selected driver: kvm2
	I1007 10:45:20.860933   22271 start.go:901] validating driver "kvm2" against &{Name:functional-382950 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-382950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:45:20.861035   22271 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 10:45:20.863333   22271 out.go:201] 
	W1007 10:45:20.864709   22271 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1007 10:45:20.866078   22271 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-382950 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.30s)
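
The dry-run exit above comes from a pre-flight check: the requested 250MB is compared against minikube's stated usable minimum of 1800MB before any host work starts, and the command exits with status 23. Below is a minimal, assumption-laden sketch of that kind of check; validateRequestedMemory and its hard-coded constant are taken from the error text, not from minikube's source.

package main

import (
	"fmt"
	"os"
)

// minimumMemoryMB mirrors the "usable minimum of 1800MB" quoted in the
// RSRC_INSUFFICIENT_REQ_MEMORY message above; it is illustrative only.
const minimumMemoryMB = 1800

// validateRequestedMemory is a hypothetical pre-flight check that rejects a
// request below the minimum before any cluster work starts.
func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minimumMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minimumMemoryMB)
	}
	return nil
}

func main() {
	if err := validateRequestedMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		os.Exit(23) // the dry-run above exited with status 23
	}
}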

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-382950 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-382950 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (148.287995ms)

                                                
                                                
-- stdout --
	* [functional-382950] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 10:45:21.078068   22346 out.go:345] Setting OutFile to fd 1 ...
	I1007 10:45:21.078186   22346 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:45:21.078191   22346 out.go:358] Setting ErrFile to fd 2...
	I1007 10:45:21.078195   22346 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:45:21.078437   22346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 10:45:21.078953   22346 out.go:352] Setting JSON to false
	I1007 10:45:21.079833   22346 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1615,"bootTime":1728296306,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 10:45:21.079933   22346 start.go:139] virtualization: kvm guest
	I1007 10:45:21.082365   22346 out.go:177] * [functional-382950] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1007 10:45:21.084068   22346 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 10:45:21.084119   22346 notify.go:220] Checking for updates...
	I1007 10:45:21.086668   22346 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 10:45:21.088166   22346 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	I1007 10:45:21.093141   22346 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	I1007 10:45:21.094477   22346 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 10:45:21.095886   22346 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 10:45:21.097913   22346 config.go:182] Loaded profile config "functional-382950": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:45:21.098502   22346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:45:21.098597   22346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:45:21.113994   22346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36069
	I1007 10:45:21.114446   22346 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:45:21.115089   22346 main.go:141] libmachine: Using API Version  1
	I1007 10:45:21.115115   22346 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:45:21.115464   22346 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:45:21.115675   22346 main.go:141] libmachine: (functional-382950) Calling .DriverName
	I1007 10:45:21.115917   22346 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 10:45:21.116233   22346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 10:45:21.116279   22346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 10:45:21.131078   22346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42157
	I1007 10:45:21.131577   22346 main.go:141] libmachine: () Calling .GetVersion
	I1007 10:45:21.132211   22346 main.go:141] libmachine: Using API Version  1
	I1007 10:45:21.132235   22346 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 10:45:21.132588   22346 main.go:141] libmachine: () Calling .GetMachineName
	I1007 10:45:21.132763   22346 main.go:141] libmachine: (functional-382950) Calling .DriverName
	I1007 10:45:21.164549   22346 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1007 10:45:21.165955   22346 start.go:297] selected driver: kvm2
	I1007 10:45:21.165968   22346 start.go:901] validating driver "kvm2" against &{Name:functional-382950 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-382950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:45:21.166074   22346 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 10:45:21.168263   22346 out.go:201] 
	W1007 10:45:21.169641   22346 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1007 10:45:21.170903   22346 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
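Note: the French messages above are the expected output for this test, which runs minikube under a French locale; the RSRC_INSUFFICIENT_REQ_MEMORY line translates to "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB". A rough sketch of reproducing the localized output by hand, assuming minikube picks the language up from the standard locale variables and that the test requests a similarly undersized memory value:

	# Hypothetical reproduction: a French locale plus an under-sized memory request
	# should print the same localized RSRC_INSUFFICIENT_REQ_MEMORY error.
	LC_ALL=fr out/minikube-linux-amd64 start -p functional-382950 --dry-run --memory 250MB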

                                                
                                    
TestFunctional/parallel/StatusCmd (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)
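The -f/--format flag used above takes a Go template over the status struct; the misspelled "kublet" is just a label in the template string, not a field name. A minimal sketch of using the same mechanism interactively (profile name taken from this run):

	# Print selected status fields via a Go template.
	out/minikube-linux-amd64 -p functional-382950 status --format='host={{.Host}} kubelet={{.Kubelet}}'
	# Machine-readable output for scripting.
	out/minikube-linux-amd64 -p functional-382950 status -o json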

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-382950 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-382950 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-vpchh" [3ca2acc1-ba8b-42c3-9401-ba14801daac4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-vpchh" [3ca2acc1-ba8b-42c3-9401-ba14801daac4] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004901624s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.107:32680
functional_test.go:1675: http://192.168.39.107:32680: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-vpchh

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.107:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.107:32680
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.64s)
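The URL returned by `service hello-node-connect --url` is an ordinary NodePort endpoint, so the same check can be made by hand; a minimal sketch (port 32680 is the NodePort allocated in this particular run and will differ between runs):

	# Recreate the deployment/service pair the test uses and fetch the echo response.
	kubectl create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-amd64 -p functional-382950 service hello-node-connect --url)
	curl -s "$URL"   # should print the Hostname / Request Information block shown above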

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (50.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [59e379a2-f308-421d-86e6-48d9b40dbe7b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003992689s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-382950 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-382950 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-382950 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-382950 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [04f89c35-2fff-4506-aca0-21cfc24729a3] Pending
helpers_test.go:344: "sp-pod" [04f89c35-2fff-4506-aca0-21cfc24729a3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [04f89c35-2fff-4506-aca0-21cfc24729a3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.272042191s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-382950 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-382950 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-382950 delete -f testdata/storage-provisioner/pod.yaml: (6.216951041s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-382950 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d2f58f9a-311e-4a7d-8013-4e6c5c900e43] Pending
helpers_test.go:344: "sp-pod" [d2f58f9a-311e-4a7d-8013-4e6c5c900e43] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d2f58f9a-311e-4a7d-8013-4e6c5c900e43] Running
E1007 10:45:58.320810   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.011228566s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-382950 exec sp-pod -- ls /tmp/mount
2024/10/07 10:45:59 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.36s)
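This test exercises the default-storageclass/storage-provisioner pair: a claim is created, a pod writes a file into the bound volume, the pod is deleted, and a fresh pod confirms the file survived. A rough manual equivalent, assuming the same testdata manifests (myclaim and sp-pod come from testdata/storage-provisioner/):

	kubectl apply -f testdata/storage-provisioner/pvc.yaml
	kubectl get pvc myclaim -o jsonpath='{.status.phase}'   # expect "Bound"
	kubectl apply -f testdata/storage-provisioner/pod.yaml
	kubectl exec sp-pod -- touch /tmp/mount/foo
	kubectl delete -f testdata/storage-provisioner/pod.yaml
	kubectl apply -f testdata/storage-provisioner/pod.yaml
	kubectl exec sp-pod -- ls /tmp/mount                    # "foo" should still be present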

                                                
                                    
TestFunctional/parallel/SSHCmd (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.40s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh -n functional-382950 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 cp functional-382950:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1618960931/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh -n functional-382950 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh -n functional-382950 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.37s)

                                                
                                    
TestFunctional/parallel/MySQL (25.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-382950 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-rmth2" [4f72dd76-f4a0-4442-8af4-0fe0c71ea360] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-rmth2" [4f72dd76-f4a0-4442-8af4-0fe0c71ea360] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.004267694s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-382950 exec mysql-6cdb49bbb-rmth2 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-382950 exec mysql-6cdb49bbb-rmth2 -- mysql -ppassword -e "show databases;": exit status 1 (144.000179ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1007 10:45:44.007939   11096 retry.go:31] will retry after 1.028489108s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-382950 exec mysql-6cdb49bbb-rmth2 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-382950 exec mysql-6cdb49bbb-rmth2 -- mysql -ppassword -e "show databases;": exit status 1 (135.648361ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1007 10:45:45.172420   11096 retry.go:31] will retry after 1.340857067s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-382950 exec mysql-6cdb49bbb-rmth2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.02s)
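The two non-zero exits above are benign: the pod reaches Running before mysqld has finished initializing, so the first queries fail with ERROR 2002 and the harness retries until one succeeds. A small sketch of the same wait-then-query pattern in shell (the pod name is whatever the mysql deployment created):

	# Poll until mysqld accepts connections, then the query returns the database list.
	until kubectl exec deploy/mysql -- mysql -ppassword -e "show databases;" 2>/dev/null; do
	  sleep 2
	done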

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/11096/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh "sudo cat /etc/test/nested/copy/11096/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
TestFunctional/parallel/CertSync (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/11096.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh "sudo cat /etc/ssl/certs/11096.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/11096.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh "sudo cat /usr/share/ca-certificates/11096.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/110962.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh "sudo cat /etc/ssl/certs/110962.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/110962.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh "sudo cat /usr/share/ca-certificates/110962.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.34s)
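The hash-named files checked above (51391683.0, 3ec20f2e.0) are the OpenSSL subject-hash form of the same synced certificates, so the mapping can be confirmed from inside the VM. A quick check, assuming the cert paths used by this run:

	# The subject hash of the synced cert should match the .0 filename checked above.
	out/minikube-linux-amd64 -p functional-382950 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/11096.pem"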

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-382950 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-382950 ssh "sudo systemctl is-active docker": exit status 1 (204.122148ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-382950 ssh "sudo systemctl is-active containerd": exit status 1 (206.939353ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)
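The "Process exited with status 3" from ssh is consistent with systemctl semantics: `systemctl is-active` prints "inactive" and exits non-zero for a unit that is not running, which is exactly what this test expects for docker and containerd on a crio profile. A direct check under that assumption:

	# Both runtimes should report "inactive" with a non-zero exit on this profile.
	out/minikube-linux-amd64 -p functional-382950 ssh 'sudo systemctl is-active docker; echo "exit=$?"'
	out/minikube-linux-amd64 -p functional-382950 ssh 'sudo systemctl is-active containerd; echo "exit=$?"'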

                                                
                                    
TestFunctional/parallel/License (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.71s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.71s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-382950 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-382950
localhost/kicbase/echo-server:functional-382950
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-382950 image ls --format short --alsologtostderr:
I1007 10:45:47.480117   22927 out.go:345] Setting OutFile to fd 1 ...
I1007 10:45:47.480314   22927 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 10:45:47.480329   22927 out.go:358] Setting ErrFile to fd 2...
I1007 10:45:47.480337   22927 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 10:45:47.480683   22927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
I1007 10:45:47.481773   22927 config.go:182] Loaded profile config "functional-382950": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 10:45:47.481934   22927 config.go:182] Loaded profile config "functional-382950": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 10:45:47.482467   22927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 10:45:47.482532   22927 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 10:45:47.498170   22927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44615
I1007 10:45:47.498658   22927 main.go:141] libmachine: () Calling .GetVersion
I1007 10:45:47.499258   22927 main.go:141] libmachine: Using API Version  1
I1007 10:45:47.499290   22927 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 10:45:47.499675   22927 main.go:141] libmachine: () Calling .GetMachineName
I1007 10:45:47.499894   22927 main.go:141] libmachine: (functional-382950) Calling .GetState
I1007 10:45:47.501657   22927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 10:45:47.501708   22927 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 10:45:47.516503   22927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44447
I1007 10:45:47.517024   22927 main.go:141] libmachine: () Calling .GetVersion
I1007 10:45:47.517515   22927 main.go:141] libmachine: Using API Version  1
I1007 10:45:47.517538   22927 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 10:45:47.517894   22927 main.go:141] libmachine: () Calling .GetMachineName
I1007 10:45:47.518132   22927 main.go:141] libmachine: (functional-382950) Calling .DriverName
I1007 10:45:47.518373   22927 ssh_runner.go:195] Run: systemctl --version
I1007 10:45:47.518402   22927 main.go:141] libmachine: (functional-382950) Calling .GetSSHHostname
I1007 10:45:47.521023   22927 main.go:141] libmachine: (functional-382950) DBG | domain functional-382950 has defined MAC address 52:54:00:ad:c2:50 in network mk-functional-382950
I1007 10:45:47.521421   22927 main.go:141] libmachine: (functional-382950) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:c2:50", ip: ""} in network mk-functional-382950: {Iface:virbr1 ExpiryTime:2024-10-07 11:42:11 +0000 UTC Type:0 Mac:52:54:00:ad:c2:50 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:functional-382950 Clientid:01:52:54:00:ad:c2:50}
I1007 10:45:47.521459   22927 main.go:141] libmachine: (functional-382950) DBG | domain functional-382950 has defined IP address 192.168.39.107 and MAC address 52:54:00:ad:c2:50 in network mk-functional-382950
I1007 10:45:47.521562   22927 main.go:141] libmachine: (functional-382950) Calling .GetSSHPort
I1007 10:45:47.521729   22927 main.go:141] libmachine: (functional-382950) Calling .GetSSHKeyPath
I1007 10:45:47.521907   22927 main.go:141] libmachine: (functional-382950) Calling .GetSSHUsername
I1007 10:45:47.522027   22927 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/functional-382950/id_rsa Username:docker}
I1007 10:45:47.609873   22927 ssh_runner.go:195] Run: sudo crictl images --output json
I1007 10:45:47.662429   22927 main.go:141] libmachine: Making call to close driver server
I1007 10:45:47.662445   22927 main.go:141] libmachine: (functional-382950) Calling .Close
I1007 10:45:47.662701   22927 main.go:141] libmachine: Successfully made call to close driver server
I1007 10:45:47.662722   22927 main.go:141] libmachine: Making call to close connection to plugin binary
I1007 10:45:47.662736   22927 main.go:141] libmachine: Making call to close driver server
I1007 10:45:47.662745   22927 main.go:141] libmachine: (functional-382950) Calling .Close
I1007 10:45:47.662743   22927 main.go:141] libmachine: (functional-382950) DBG | Closing plugin on server side
I1007 10:45:47.662966   22927 main.go:141] libmachine: Successfully made call to close driver server
I1007 10:45:47.662984   22927 main.go:141] libmachine: Making call to close connection to plugin binary
I1007 10:45:47.663007   22927 main.go:141] libmachine: (functional-382950) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-382950 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| localhost/kicbase/echo-server           | functional-382950  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/nginx                 | latest             | 7f553e8bbc897 | 196MB  |
| localhost/minikube-local-cache-test     | functional-382950  | 84d5087222f1d | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-382950 image ls --format table --alsologtostderr:
I1007 10:45:51.015545   23347 out.go:345] Setting OutFile to fd 1 ...
I1007 10:45:51.015654   23347 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 10:45:51.015663   23347 out.go:358] Setting ErrFile to fd 2...
I1007 10:45:51.015667   23347 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 10:45:51.015830   23347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
I1007 10:45:51.016444   23347 config.go:182] Loaded profile config "functional-382950": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 10:45:51.016541   23347 config.go:182] Loaded profile config "functional-382950": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 10:45:51.016874   23347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 10:45:51.016916   23347 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 10:45:51.031621   23347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43995
I1007 10:45:51.032162   23347 main.go:141] libmachine: () Calling .GetVersion
I1007 10:45:51.032774   23347 main.go:141] libmachine: Using API Version  1
I1007 10:45:51.032793   23347 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 10:45:51.033129   23347 main.go:141] libmachine: () Calling .GetMachineName
I1007 10:45:51.033313   23347 main.go:141] libmachine: (functional-382950) Calling .GetState
I1007 10:45:51.035382   23347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 10:45:51.035423   23347 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 10:45:51.051064   23347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35363
I1007 10:45:51.051504   23347 main.go:141] libmachine: () Calling .GetVersion
I1007 10:45:51.052024   23347 main.go:141] libmachine: Using API Version  1
I1007 10:45:51.052053   23347 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 10:45:51.052420   23347 main.go:141] libmachine: () Calling .GetMachineName
I1007 10:45:51.052638   23347 main.go:141] libmachine: (functional-382950) Calling .DriverName
I1007 10:45:51.052826   23347 ssh_runner.go:195] Run: systemctl --version
I1007 10:45:51.052847   23347 main.go:141] libmachine: (functional-382950) Calling .GetSSHHostname
I1007 10:45:51.055854   23347 main.go:141] libmachine: (functional-382950) DBG | domain functional-382950 has defined MAC address 52:54:00:ad:c2:50 in network mk-functional-382950
I1007 10:45:51.056332   23347 main.go:141] libmachine: (functional-382950) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:c2:50", ip: ""} in network mk-functional-382950: {Iface:virbr1 ExpiryTime:2024-10-07 11:42:11 +0000 UTC Type:0 Mac:52:54:00:ad:c2:50 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:functional-382950 Clientid:01:52:54:00:ad:c2:50}
I1007 10:45:51.056355   23347 main.go:141] libmachine: (functional-382950) DBG | domain functional-382950 has defined IP address 192.168.39.107 and MAC address 52:54:00:ad:c2:50 in network mk-functional-382950
I1007 10:45:51.056552   23347 main.go:141] libmachine: (functional-382950) Calling .GetSSHPort
I1007 10:45:51.056767   23347 main.go:141] libmachine: (functional-382950) Calling .GetSSHKeyPath
I1007 10:45:51.056958   23347 main.go:141] libmachine: (functional-382950) Calling .GetSSHUsername
I1007 10:45:51.057102   23347 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/functional-382950/id_rsa Username:docker}
I1007 10:45:51.176809   23347 ssh_runner.go:195] Run: sudo crictl images --output json
I1007 10:45:51.280208   23347 main.go:141] libmachine: Making call to close driver server
I1007 10:45:51.280229   23347 main.go:141] libmachine: (functional-382950) Calling .Close
I1007 10:45:51.280503   23347 main.go:141] libmachine: Successfully made call to close driver server
I1007 10:45:51.280521   23347 main.go:141] libmachine: Making call to close connection to plugin binary
I1007 10:45:51.280531   23347 main.go:141] libmachine: (functional-382950) DBG | Closing plugin on server side
I1007 10:45:51.280534   23347 main.go:141] libmachine: Making call to close driver server
I1007 10:45:51.280617   23347 main.go:141] libmachine: (functional-382950) Calling .Close
I1007 10:45:51.280847   23347 main.go:141] libmachine: Successfully made call to close driver server
I1007 10:45:51.280888   23347 main.go:141] libmachine: (functional-382950) DBG | Closing plugin on server side
I1007 10:45:51.280894   23347 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-382950 image ls --format json --alsologtostderr:
[{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"12968670680f4561ef68187823
91eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"7f553e8bbc897571642d836b31eaf6ecbe395d7641c2b24291356ed28f3f2bd0","repoDigests":["docker.io/library/nginx@sha256:396c6e925f28fbbed95a475d27c18886289c2bbc53231534dc86c163558b5e4b","docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0"],"repoTags":["docker.io/library/nginx:latest"],"size":
"195818028"},{"id":"84d5087222f1d9a9f37f4a4670d8e1bc5f8b5392c03b5443aa30c267222ad974","repoDigests":["localhost/minikube-local-cache-test@sha256:ce1cd08c848af3858dd9b86a9e85e263b9fd384808bbde492c0fb0ef24cde831"],"repoTags":["localhost/minikube-local-cache-test:functional-382950"],"size":"3330"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"si
ze":"68420934"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e
04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-382950"],"size":"4943877"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37f
ef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registr
y.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-382950 image ls --format json --alsologtostderr:
I1007 10:45:50.745773   23324 out.go:345] Setting OutFile to fd 1 ...
I1007 10:45:50.745868   23324 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 10:45:50.745872   23324 out.go:358] Setting ErrFile to fd 2...
I1007 10:45:50.745877   23324 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 10:45:50.746066   23324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
I1007 10:45:50.746631   23324 config.go:182] Loaded profile config "functional-382950": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 10:45:50.746722   23324 config.go:182] Loaded profile config "functional-382950": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 10:45:50.747089   23324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 10:45:50.747126   23324 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 10:45:50.762154   23324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37845
I1007 10:45:50.762718   23324 main.go:141] libmachine: () Calling .GetVersion
I1007 10:45:50.763229   23324 main.go:141] libmachine: Using API Version  1
I1007 10:45:50.763251   23324 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 10:45:50.763577   23324 main.go:141] libmachine: () Calling .GetMachineName
I1007 10:45:50.763750   23324 main.go:141] libmachine: (functional-382950) Calling .GetState
I1007 10:45:50.765763   23324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 10:45:50.765800   23324 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 10:45:50.780543   23324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36985
I1007 10:45:50.780979   23324 main.go:141] libmachine: () Calling .GetVersion
I1007 10:45:50.781495   23324 main.go:141] libmachine: Using API Version  1
I1007 10:45:50.781520   23324 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 10:45:50.781802   23324 main.go:141] libmachine: () Calling .GetMachineName
I1007 10:45:50.781968   23324 main.go:141] libmachine: (functional-382950) Calling .DriverName
I1007 10:45:50.782156   23324 ssh_runner.go:195] Run: systemctl --version
I1007 10:45:50.782177   23324 main.go:141] libmachine: (functional-382950) Calling .GetSSHHostname
I1007 10:45:50.784801   23324 main.go:141] libmachine: (functional-382950) DBG | domain functional-382950 has defined MAC address 52:54:00:ad:c2:50 in network mk-functional-382950
I1007 10:45:50.785134   23324 main.go:141] libmachine: (functional-382950) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:c2:50", ip: ""} in network mk-functional-382950: {Iface:virbr1 ExpiryTime:2024-10-07 11:42:11 +0000 UTC Type:0 Mac:52:54:00:ad:c2:50 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:functional-382950 Clientid:01:52:54:00:ad:c2:50}
I1007 10:45:50.785166   23324 main.go:141] libmachine: (functional-382950) DBG | domain functional-382950 has defined IP address 192.168.39.107 and MAC address 52:54:00:ad:c2:50 in network mk-functional-382950
I1007 10:45:50.785251   23324 main.go:141] libmachine: (functional-382950) Calling .GetSSHPort
I1007 10:45:50.785424   23324 main.go:141] libmachine: (functional-382950) Calling .GetSSHKeyPath
I1007 10:45:50.785589   23324 main.go:141] libmachine: (functional-382950) Calling .GetSSHUsername
I1007 10:45:50.785721   23324 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/functional-382950/id_rsa Username:docker}
I1007 10:45:50.898680   23324 ssh_runner.go:195] Run: sudo crictl images --output json
I1007 10:45:50.953803   23324 main.go:141] libmachine: Making call to close driver server
I1007 10:45:50.953825   23324 main.go:141] libmachine: (functional-382950) Calling .Close
I1007 10:45:50.954101   23324 main.go:141] libmachine: Successfully made call to close driver server
I1007 10:45:50.954121   23324 main.go:141] libmachine: Making call to close connection to plugin binary
I1007 10:45:50.954137   23324 main.go:141] libmachine: Making call to close driver server
I1007 10:45:50.954145   23324 main.go:141] libmachine: (functional-382950) Calling .Close
I1007 10:45:50.954342   23324 main.go:141] libmachine: (functional-382950) DBG | Closing plugin on server side
I1007 10:45:50.954372   23324 main.go:141] libmachine: Successfully made call to close driver server
I1007 10:45:50.954384   23324 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-382950 image ls --format yaml --alsologtostderr:
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-382950
size: "4943877"
- id: 84d5087222f1d9a9f37f4a4670d8e1bc5f8b5392c03b5443aa30c267222ad974
repoDigests:
- localhost/minikube-local-cache-test@sha256:ce1cd08c848af3858dd9b86a9e85e263b9fd384808bbde492c0fb0ef24cde831
repoTags:
- localhost/minikube-local-cache-test:functional-382950
size: "3330"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 7f553e8bbc897571642d836b31eaf6ecbe395d7641c2b24291356ed28f3f2bd0
repoDigests:
- docker.io/library/nginx@sha256:396c6e925f28fbbed95a475d27c18886289c2bbc53231534dc86c163558b5e4b
- docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0
repoTags:
- docker.io/library/nginx:latest
size: "195818028"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-382950 image ls --format yaml --alsologtostderr:
I1007 10:45:47.719851   22950 out.go:345] Setting OutFile to fd 1 ...
I1007 10:45:47.719956   22950 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 10:45:47.719963   22950 out.go:358] Setting ErrFile to fd 2...
I1007 10:45:47.719968   22950 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 10:45:47.720187   22950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
I1007 10:45:47.720803   22950 config.go:182] Loaded profile config "functional-382950": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 10:45:47.720912   22950 config.go:182] Loaded profile config "functional-382950": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 10:45:47.721299   22950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 10:45:47.721352   22950 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 10:45:47.736255   22950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36853
I1007 10:45:47.736759   22950 main.go:141] libmachine: () Calling .GetVersion
I1007 10:45:47.737344   22950 main.go:141] libmachine: Using API Version  1
I1007 10:45:47.737370   22950 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 10:45:47.737740   22950 main.go:141] libmachine: () Calling .GetMachineName
I1007 10:45:47.737904   22950 main.go:141] libmachine: (functional-382950) Calling .GetState
I1007 10:45:47.739899   22950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 10:45:47.739946   22950 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 10:45:47.755091   22950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33323
I1007 10:45:47.755598   22950 main.go:141] libmachine: () Calling .GetVersion
I1007 10:45:47.756236   22950 main.go:141] libmachine: Using API Version  1
I1007 10:45:47.756269   22950 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 10:45:47.756715   22950 main.go:141] libmachine: () Calling .GetMachineName
I1007 10:45:47.756918   22950 main.go:141] libmachine: (functional-382950) Calling .DriverName
I1007 10:45:47.757152   22950 ssh_runner.go:195] Run: systemctl --version
I1007 10:45:47.757211   22950 main.go:141] libmachine: (functional-382950) Calling .GetSSHHostname
I1007 10:45:47.759832   22950 main.go:141] libmachine: (functional-382950) DBG | domain functional-382950 has defined MAC address 52:54:00:ad:c2:50 in network mk-functional-382950
I1007 10:45:47.760289   22950 main.go:141] libmachine: (functional-382950) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:c2:50", ip: ""} in network mk-functional-382950: {Iface:virbr1 ExpiryTime:2024-10-07 11:42:11 +0000 UTC Type:0 Mac:52:54:00:ad:c2:50 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:functional-382950 Clientid:01:52:54:00:ad:c2:50}
I1007 10:45:47.760319   22950 main.go:141] libmachine: (functional-382950) DBG | domain functional-382950 has defined IP address 192.168.39.107 and MAC address 52:54:00:ad:c2:50 in network mk-functional-382950
I1007 10:45:47.760488   22950 main.go:141] libmachine: (functional-382950) Calling .GetSSHPort
I1007 10:45:47.760616   22950 main.go:141] libmachine: (functional-382950) Calling .GetSSHKeyPath
I1007 10:45:47.760755   22950 main.go:141] libmachine: (functional-382950) Calling .GetSSHUsername
I1007 10:45:47.760890   22950 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/functional-382950/id_rsa Username:docker}
I1007 10:45:47.857304   22950 ssh_runner.go:195] Run: sudo crictl images --output json
I1007 10:45:47.996534   22950 main.go:141] libmachine: Making call to close driver server
I1007 10:45:47.996551   22950 main.go:141] libmachine: (functional-382950) Calling .Close
I1007 10:45:47.996839   22950 main.go:141] libmachine: Successfully made call to close driver server
I1007 10:45:47.996857   22950 main.go:141] libmachine: Making call to close connection to plugin binary
I1007 10:45:47.996876   22950 main.go:141] libmachine: Making call to close driver server
I1007 10:45:47.996890   22950 main.go:141] libmachine: (functional-382950) Calling .Close
I1007 10:45:47.997166   22950 main.go:141] libmachine: Successfully made call to close driver server
I1007 10:45:47.997181   22950 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (5.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-382950 ssh pgrep buildkitd: exit status 1 (255.19609ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 image build -t localhost/my-image:functional-382950 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-382950 image build -t localhost/my-image:functional-382950 testdata/build --alsologtostderr: (5.405252265s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-382950 image build -t localhost/my-image:functional-382950 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> cf4680fd1a4
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-382950
--> 3bc77951234
Successfully tagged localhost/my-image:functional-382950
3bc7795123454eb24a663d2146d84f22673e2c0a16800ab242886afd955c4e86
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-382950 image build -t localhost/my-image:functional-382950 testdata/build --alsologtostderr:
I1007 10:45:48.312225   23043 out.go:345] Setting OutFile to fd 1 ...
I1007 10:45:48.312398   23043 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 10:45:48.312409   23043 out.go:358] Setting ErrFile to fd 2...
I1007 10:45:48.312416   23043 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 10:45:48.312680   23043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
I1007 10:45:48.313439   23043 config.go:182] Loaded profile config "functional-382950": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 10:45:48.314072   23043 config.go:182] Loaded profile config "functional-382950": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 10:45:48.314628   23043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 10:45:48.314684   23043 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 10:45:48.329512   23043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33555
I1007 10:45:48.330017   23043 main.go:141] libmachine: () Calling .GetVersion
I1007 10:45:48.330532   23043 main.go:141] libmachine: Using API Version  1
I1007 10:45:48.330553   23043 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 10:45:48.330918   23043 main.go:141] libmachine: () Calling .GetMachineName
I1007 10:45:48.331089   23043 main.go:141] libmachine: (functional-382950) Calling .GetState
I1007 10:45:48.333260   23043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 10:45:48.333353   23043 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 10:45:48.349229   23043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33843
I1007 10:45:48.349672   23043 main.go:141] libmachine: () Calling .GetVersion
I1007 10:45:48.350198   23043 main.go:141] libmachine: Using API Version  1
I1007 10:45:48.350223   23043 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 10:45:48.350608   23043 main.go:141] libmachine: () Calling .GetMachineName
I1007 10:45:48.350771   23043 main.go:141] libmachine: (functional-382950) Calling .DriverName
I1007 10:45:48.350964   23043 ssh_runner.go:195] Run: systemctl --version
I1007 10:45:48.350989   23043 main.go:141] libmachine: (functional-382950) Calling .GetSSHHostname
I1007 10:45:48.353920   23043 main.go:141] libmachine: (functional-382950) DBG | domain functional-382950 has defined MAC address 52:54:00:ad:c2:50 in network mk-functional-382950
I1007 10:45:48.354326   23043 main.go:141] libmachine: (functional-382950) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:c2:50", ip: ""} in network mk-functional-382950: {Iface:virbr1 ExpiryTime:2024-10-07 11:42:11 +0000 UTC Type:0 Mac:52:54:00:ad:c2:50 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:functional-382950 Clientid:01:52:54:00:ad:c2:50}
I1007 10:45:48.354354   23043 main.go:141] libmachine: (functional-382950) DBG | domain functional-382950 has defined IP address 192.168.39.107 and MAC address 52:54:00:ad:c2:50 in network mk-functional-382950
I1007 10:45:48.354495   23043 main.go:141] libmachine: (functional-382950) Calling .GetSSHPort
I1007 10:45:48.354649   23043 main.go:141] libmachine: (functional-382950) Calling .GetSSHKeyPath
I1007 10:45:48.354771   23043 main.go:141] libmachine: (functional-382950) Calling .GetSSHUsername
I1007 10:45:48.354903   23043 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/functional-382950/id_rsa Username:docker}
I1007 10:45:48.456008   23043 build_images.go:161] Building image from path: /tmp/build.27422736.tar
I1007 10:45:48.456067   23043 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1007 10:45:48.472646   23043 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.27422736.tar
I1007 10:45:48.478194   23043 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.27422736.tar: stat -c "%s %y" /var/lib/minikube/build/build.27422736.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.27422736.tar': No such file or directory
I1007 10:45:48.478246   23043 ssh_runner.go:362] scp /tmp/build.27422736.tar --> /var/lib/minikube/build/build.27422736.tar (3072 bytes)
I1007 10:45:48.510759   23043 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.27422736
I1007 10:45:48.522550   23043 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.27422736 -xf /var/lib/minikube/build/build.27422736.tar
I1007 10:45:48.533774   23043 crio.go:315] Building image: /var/lib/minikube/build/build.27422736
I1007 10:45:48.533836   23043 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-382950 /var/lib/minikube/build/build.27422736 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1007 10:45:53.635067   23043 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-382950 /var/lib/minikube/build/build.27422736 --cgroup-manager=cgroupfs: (5.101209012s)
I1007 10:45:53.635135   23043 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.27422736
I1007 10:45:53.648453   23043 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.27422736.tar
I1007 10:45:53.659010   23043 build_images.go:217] Built localhost/my-image:functional-382950 from /tmp/build.27422736.tar
I1007 10:45:53.659050   23043 build_images.go:133] succeeded building to: functional-382950
I1007 10:45:53.659054   23043 build_images.go:134] failed building to: 
I1007 10:45:53.659093   23043 main.go:141] libmachine: Making call to close driver server
I1007 10:45:53.659107   23043 main.go:141] libmachine: (functional-382950) Calling .Close
I1007 10:45:53.659400   23043 main.go:141] libmachine: Successfully made call to close driver server
I1007 10:45:53.659428   23043 main.go:141] libmachine: Making call to close connection to plugin binary
I1007 10:45:53.659440   23043 main.go:141] libmachine: Making call to close driver server
I1007 10:45:53.659450   23043 main.go:141] libmachine: (functional-382950) Calling .Close
I1007 10:45:53.659404   23043 main.go:141] libmachine: (functional-382950) DBG | Closing plugin on server side
I1007 10:45:53.659644   23043 main.go:141] libmachine: (functional-382950) DBG | Closing plugin on server side
I1007 10:45:53.659661   23043 main.go:141] libmachine: Successfully made call to close driver server
I1007 10:45:53.659668   23043 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.89s)
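
Note: the build log above implies a three-step Containerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A minimal sketch that reproduces the same flow by hand follows; the Dockerfile and content.txt contents here are assumptions inferred from the logged STEP lines, not the actual testdata/build contents:

    mkdir -p /tmp/build-example
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/build-example/Dockerfile
    echo "example" > /tmp/build-example/content.txt
    out/minikube-linux-amd64 -p functional-382950 image build -t localhost/my-image:functional-382950 /tmp/build-example --alsologtostderr
    out/minikube-linux-amd64 -p functional-382950 image ls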

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.898689956s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-382950
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.92s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-382950 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-382950 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-kdgc5" [e3618080-5dab-4806-9ef8-ccfaf055a9ce] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-kdgc5" [e3618080-5dab-4806-9ef8-ccfaf055a9ce] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004619659s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)
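
Note: the deploy-and-expose sequence here is two kubectl commands (taken verbatim from the log); the last line is a manual stand-in for the test's wait on pods labelled app=hello-node:

    kubectl --context functional-382950 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-382950 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-382950 get pods -l app=hello-node -w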

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 image load --daemon kicbase/echo-server:functional-382950 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-382950 image load --daemon kicbase/echo-server:functional-382950 --alsologtostderr: (2.609186474s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.85s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 image load --daemon kicbase/echo-server:functional-382950 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-382950
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 image load --daemon kicbase/echo-server:functional-382950 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 image save kicbase/echo-server:functional-382950 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 image rm kicbase/echo-server:functional-382950 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-382950
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 image save --daemon kicbase/echo-server:functional-382950 --alsologtostderr
E1007 10:45:17.358805   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-382950
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)
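
Note: taken together, the ImageLoadDaemon, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon runs above form a full image round-trip. A condensed sketch of that round-trip; the tarball path is an example (the report itself uses a Jenkins workspace path):

    docker pull kicbase/echo-server:1.0
    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-382950
    out/minikube-linux-amd64 -p functional-382950 image load --daemon kicbase/echo-server:functional-382950
    out/minikube-linux-amd64 -p functional-382950 image save kicbase/echo-server:functional-382950 /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-382950 image rm kicbase/echo-server:functional-382950
    out/minikube-linux-amd64 -p functional-382950 image load /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-382950 image save --daemon kicbase/echo-server:functional-382950
    out/minikube-linux-amd64 -p functional-382950 image ls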

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "411.338996ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "50.673938ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 service list -o json
functional_test.go:1494: Took "395.133049ms" to run "out/minikube-linux-amd64 -p functional-382950 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "356.140643ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "50.410598ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
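
Note: the ProfileCmd subtests time four profile-listing variants, all shown verbatim in the logs above:

    out/minikube-linux-amd64 profile list
    out/minikube-linux-amd64 profile list -l
    out/minikube-linux-amd64 profile list -o json
    out/minikube-linux-amd64 profile list -o json --light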

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.107:32326
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (26.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-382950 /tmp/TestFunctionalparallelMountCmdany-port3136483733/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728297920363935532" to /tmp/TestFunctionalparallelMountCmdany-port3136483733/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728297920363935532" to /tmp/TestFunctionalparallelMountCmdany-port3136483733/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728297920363935532" to /tmp/TestFunctionalparallelMountCmdany-port3136483733/001/test-1728297920363935532
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-382950 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (273.092719ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1007 10:45:20.637300   11096 retry.go:31] will retry after 560.093769ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  7 10:45 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  7 10:45 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  7 10:45 test-1728297920363935532
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh cat /mount-9p/test-1728297920363935532
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-382950 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [06e80b4b-0eb0-402d-95cd-7ec4fcd50726] Pending
helpers_test.go:344: "busybox-mount" [06e80b4b-0eb0-402d-95cd-7ec4fcd50726] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [06e80b4b-0eb0-402d-95cd-7ec4fcd50726] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [06e80b4b-0eb0-402d-95cd-7ec4fcd50726] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 24.003504738s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-382950 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-382950 /tmp/TestFunctionalparallelMountCmdany-port3136483733/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (26.76s)
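
Note: the 9p mount flow driven by this test can be reproduced by hand; everything below mirrors the logged commands except the host directory, which is an example path:

    out/minikube-linux-amd64 mount -p functional-382950 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-amd64 -p functional-382950 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-382950 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-382950 ssh "sudo umount -f /mount-9p"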

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.49s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.107:32326
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)
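
Note: the HTTPS, Format and URL subtests all resolve the same hello-node NodePort endpoint; the three lookups are below, and the final curl is an assumed manual check against the endpoint reported in this run:

    out/minikube-linux-amd64 -p functional-382950 service --namespace=default --https --url hello-node
    out/minikube-linux-amd64 -p functional-382950 service hello-node --url --format={{.IP}}
    out/minikube-linux-amd64 -p functional-382950 service hello-node --url
    curl http://192.168.39.107:32326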

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.30s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-382950 /tmp/TestFunctionalparallelMountCmdspecific-port1806646265/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-382950 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (260.78271ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1007 10:45:47.383604   11096 retry.go:31] will retry after 611.963292ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-382950 /tmp/TestFunctionalparallelMountCmdspecific-port1806646265/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-382950 ssh "sudo umount -f /mount-9p": exit status 1 (229.445052ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-382950 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-382950 /tmp/TestFunctionalparallelMountCmdspecific-port1806646265/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.98s)
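
Note: same mount flow as any-port, but pinned to a fixed 9p port via --port (46464 in this run); the host directory is again an example path:

    out/minikube-linux-amd64 mount -p functional-382950 /tmp/mount-src:/mount-9p --port 46464 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 -p functional-382950 ssh "findmnt -T /mount-9p | grep 9p"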

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-382950 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4081488092/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-382950 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4081488092/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-382950 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4081488092/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-382950 ssh "findmnt -T" /mount1: exit status 1 (260.205507ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1007 10:45:49.361545   11096 retry.go:31] will retry after 593.213559ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-382950 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-382950 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-382950 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4081488092/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-382950 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4081488092/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-382950 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4081488092/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.61s)
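
Note: stale mount helpers are cleaned up in one shot with --kill, which is what VerifyCleanup checks (command taken from the log):

    out/minikube-linux-amd64 mount -p functional-382950 --kill=true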

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-382950
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-382950
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-382950
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (204.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-406505 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1007 10:47:20.243148   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-406505 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m23.923469761s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (204.59s)
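
Note: the HA cluster is created with the --ha flag; the exact invocation from this run, followed by the status check:

    out/minikube-linux-amd64 start -p ha-406505 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr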

                                                
                                    
TestMultiControlPlane/serial/DeployApp (8.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406505 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406505 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-406505 -- rollout status deployment/busybox: (5.852778581s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406505 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406505 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406505 -- exec busybox-7dff88458-bjz2q -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406505 -- exec busybox-7dff88458-ktkg9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406505 -- exec busybox-7dff88458-tzgjx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406505 -- exec busybox-7dff88458-bjz2q -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406505 -- exec busybox-7dff88458-ktkg9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406505 -- exec busybox-7dff88458-tzgjx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406505 -- exec busybox-7dff88458-bjz2q -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406505 -- exec busybox-7dff88458-ktkg9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406505 -- exec busybox-7dff88458-tzgjx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.03s)
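
Note: DeployApp applies the ha-pod-dns-test manifest, waits for the rollout, then checks DNS from every busybox replica; condensed from the log, with <pod> standing in for one of the busybox-7dff88458-* pods listed above:

    out/minikube-linux-amd64 kubectl -p ha-406505 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-amd64 kubectl -p ha-406505 -- rollout status deployment/busybox
    out/minikube-linux-amd64 kubectl -p ha-406505 -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local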

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406505 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406505 -- exec busybox-7dff88458-bjz2q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406505 -- exec busybox-7dff88458-bjz2q -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406505 -- exec busybox-7dff88458-ktkg9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406505 -- exec busybox-7dff88458-ktkg9 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406505 -- exec busybox-7dff88458-tzgjx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-406505 -- exec busybox-7dff88458-tzgjx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.29s)
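
Note: each pod resolves host.minikube.internal and pings the host gateway; the per-pod check from the log, with <pod> standing in for a busybox-7dff88458-* pod:

    out/minikube-linux-amd64 kubectl -p ha-406505 -- exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-amd64 kubectl -p ha-406505 -- exec <pod> -- sh -c "ping -c 1 192.168.39.1"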

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (59.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-406505 -v=7 --alsologtostderr
E1007 10:49:36.380910   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:50:04.085515   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:50:08.250199   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:50:08.256659   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:50:08.268092   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:50:08.289527   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:50:08.330983   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:50:08.412445   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:50:08.574055   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:50:08.895567   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:50:09.536893   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:50:10.818214   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:50:13.379767   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:50:18.501281   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:50:28.742948   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-406505 -v=7 --alsologtostderr: (58.752678724s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.64s)
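
Note: a worker node is added to the running HA profile with node add (verbatim from this run), then the cluster status is re-checked:

    out/minikube-linux-amd64 node add -p ha-406505 -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr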

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-406505 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 cp testdata/cp-test.txt ha-406505:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 cp ha-406505:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2665267876/001/cp-test_ha-406505.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 cp ha-406505:/home/docker/cp-test.txt ha-406505-m02:/home/docker/cp-test_ha-406505_ha-406505-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505-m02 "sudo cat /home/docker/cp-test_ha-406505_ha-406505-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 cp ha-406505:/home/docker/cp-test.txt ha-406505-m03:/home/docker/cp-test_ha-406505_ha-406505-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505-m03 "sudo cat /home/docker/cp-test_ha-406505_ha-406505-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 cp ha-406505:/home/docker/cp-test.txt ha-406505-m04:/home/docker/cp-test_ha-406505_ha-406505-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505-m04 "sudo cat /home/docker/cp-test_ha-406505_ha-406505-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 cp testdata/cp-test.txt ha-406505-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 cp ha-406505-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2665267876/001/cp-test_ha-406505-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 cp ha-406505-m02:/home/docker/cp-test.txt ha-406505:/home/docker/cp-test_ha-406505-m02_ha-406505.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505 "sudo cat /home/docker/cp-test_ha-406505-m02_ha-406505.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 cp ha-406505-m02:/home/docker/cp-test.txt ha-406505-m03:/home/docker/cp-test_ha-406505-m02_ha-406505-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505-m03 "sudo cat /home/docker/cp-test_ha-406505-m02_ha-406505-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 cp ha-406505-m02:/home/docker/cp-test.txt ha-406505-m04:/home/docker/cp-test_ha-406505-m02_ha-406505-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505-m04 "sudo cat /home/docker/cp-test_ha-406505-m02_ha-406505-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 cp testdata/cp-test.txt ha-406505-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 cp ha-406505-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2665267876/001/cp-test_ha-406505-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 cp ha-406505-m03:/home/docker/cp-test.txt ha-406505:/home/docker/cp-test_ha-406505-m03_ha-406505.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505 "sudo cat /home/docker/cp-test_ha-406505-m03_ha-406505.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 cp ha-406505-m03:/home/docker/cp-test.txt ha-406505-m02:/home/docker/cp-test_ha-406505-m03_ha-406505-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505-m02 "sudo cat /home/docker/cp-test_ha-406505-m03_ha-406505-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 cp ha-406505-m03:/home/docker/cp-test.txt ha-406505-m04:/home/docker/cp-test_ha-406505-m03_ha-406505-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505-m04 "sudo cat /home/docker/cp-test_ha-406505-m03_ha-406505-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 cp testdata/cp-test.txt ha-406505-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2665267876/001/cp-test_ha-406505-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt ha-406505:/home/docker/cp-test_ha-406505-m04_ha-406505.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505 "sudo cat /home/docker/cp-test_ha-406505-m04_ha-406505.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt ha-406505-m02:/home/docker/cp-test_ha-406505-m04_ha-406505-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505-m02 "sudo cat /home/docker/cp-test_ha-406505-m04_ha-406505-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 cp ha-406505-m04:/home/docker/cp-test.txt ha-406505-m03:/home/docker/cp-test_ha-406505-m04_ha-406505-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 ssh -n ha-406505-m03 "sudo cat /home/docker/cp-test_ha-406505-m04_ha-406505-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.20s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (236.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-406505 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1007 11:10:08.250677   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-406505 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m56.01230632s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (236.77s)
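
The readiness check at ha_test.go:594 asks kubectl for one Ready-condition status per node through a go-template and expects every printed line to be "True". A minimal sketch of the same check from Go, shelling out to kubectl with the template taken from the log above (the helper name is illustrative, and the outer shell quoting is dropped because exec.Command passes the template as a single argument):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // allNodesReady runs the go-template query used by the test and reports
    // whether every node's Ready condition is "True".
    func allNodesReady() (bool, error) {
        tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
        out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
        if err != nil {
            return false, err
        }
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if strings.TrimSpace(line) != "True" {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        ready, err := allNodesReady()
        fmt.Println(ready, err)
    }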

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (77.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-406505 --control-plane -v=7 --alsologtostderr
E1007 11:14:36.381360   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-406505 --control-plane -v=7 --alsologtostderr: (1m16.639414752s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-406505 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.50s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                    
TestJSONOutput/start/Command (59.75s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-236862 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1007 11:15:08.250666   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-236862 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (59.744839309s)
--- PASS: TestJSONOutput/start/Command (59.75s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-236862 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-236862 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.33s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-236862 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-236862 --output=json --user=testUser: (7.334191881s)
--- PASS: TestJSONOutput/stop/Command (7.33s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-390528 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-390528 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (67.310292ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"997dc94f-c202-416c-8b1f-64a32ac0be8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-390528] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"440513d8-a71b-4c67-bb45-3a3f496c179a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19761"}}
	{"specversion":"1.0","id":"d4aa7468-bf55-49ae-aa03-c0cadac14396","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1051e751-53aa-4ec4-bfcb-74d1bd303944","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig"}}
	{"specversion":"1.0","id":"529b89c6-378c-4744-917e-f31b824d446f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube"}}
	{"specversion":"1.0","id":"aaf0d937-d9aa-4183-a08c-2dd8b188b014","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0e383db2-629a-4eca-a541-fa440d6cc55d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bc7b5df0-450c-47d1-82a6-d4e8bb846667","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-390528" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-390528
--- PASS: TestErrorJSONOutput (0.20s)
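
Every line that --output=json emits above is a standalone CloudEvents-style JSON object carrying specversion, id, source, type, datacontenttype and a data payload; the final line is an io.k8s.sigs.minikube.error event with the exit code and message. A minimal sketch of consuming such a stream line by line; the struct below is inferred from the fields visible in this log, not taken from minikube's own types:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // event mirrors the fields visible in the JSON lines above.
    type event struct {
        SpecVersion     string            `json:"specversion"`
        ID              string            `json:"id"`
        Source          string            `json:"source"`
        Type            string            `json:"type"`
        DataContentType string            `json:"datacontenttype"`
        Data            map[string]string `json:"data"`
    }

    func main() {
        // e.g. minikube start --output=json ... | this program
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var e event
            if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
                continue // skip anything that is not a JSON event line
            }
            if e.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("error %s: %s\n", e.Data["exitcode"], e.Data["message"])
            }
        }
    }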

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (89.83s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-786166 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-786166 --driver=kvm2  --container-runtime=crio: (46.065257593s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-798229 --driver=kvm2  --container-runtime=crio
E1007 11:17:39.448517   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-798229 --driver=kvm2  --container-runtime=crio: (40.890589036s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-786166
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-798229
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-798229" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-798229
helpers_test.go:175: Cleaning up "first-786166" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-786166
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-786166: (1.003632964s)
--- PASS: TestMinikubeProfile (89.83s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (29.64s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-445231 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-445231 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.636475616s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.64s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-445231 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-445231 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.51s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-456101 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-456101 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.511551215s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.51s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-456101 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-456101 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-445231 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-456101 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-456101 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.61s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-456101
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-456101: (1.606808949s)
--- PASS: TestMountStart/serial/Stop (1.61s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.53s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-456101
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-456101: (22.534611494s)
--- PASS: TestMountStart/serial/RestartStopped (23.53s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-456101 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-456101 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (116.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-873106 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1007 11:19:36.381091   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:20:08.250752   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-873106 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m55.907716118s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (116.31s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-873106 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-873106 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-873106 -- rollout status deployment/busybox: (4.667284394s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-873106 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-873106 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-873106 -- exec busybox-7dff88458-4jmrf -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-873106 -- exec busybox-7dff88458-kcfrp -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-873106 -- exec busybox-7dff88458-4jmrf -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-873106 -- exec busybox-7dff88458-kcfrp -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-873106 -- exec busybox-7dff88458-4jmrf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-873106 -- exec busybox-7dff88458-kcfrp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.19s)
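
The DNS assertions above run nslookup inside each busybox replica for kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local, so cluster DNS is exercised from pods on both nodes. A minimal sketch of the same loop, calling kubectl directly rather than through minikube kubectl; the pod names are copied from this run and will differ between runs:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        pods := []string{"busybox-7dff88458-4jmrf", "busybox-7dff88458-kcfrp"}
        names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
        for _, pod := range pods {
            for _, name := range names {
                out, err := exec.Command("kubectl", "--context", "multinode-873106",
                    "exec", pod, "--", "nslookup", name).CombinedOutput()
                if err != nil {
                    fmt.Printf("%s failed to resolve %s: %v\n%s", pod, name, err, out)
                    continue
                }
                fmt.Printf("%s resolved %s\n", pod, name)
            }
        }
    }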

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-873106 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-873106 -- exec busybox-7dff88458-4jmrf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-873106 -- exec busybox-7dff88458-4jmrf -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-873106 -- exec busybox-7dff88458-kcfrp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-873106 -- exec busybox-7dff88458-kcfrp -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)

                                                
                                    
TestMultiNode/serial/AddNode (49.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-873106 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-873106 -v 3 --alsologtostderr: (48.952699899s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (49.53s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-873106 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.60s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 cp testdata/cp-test.txt multinode-873106:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 ssh -n multinode-873106 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 cp multinode-873106:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2112677138/001/cp-test_multinode-873106.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 ssh -n multinode-873106 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 cp multinode-873106:/home/docker/cp-test.txt multinode-873106-m02:/home/docker/cp-test_multinode-873106_multinode-873106-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 ssh -n multinode-873106 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 ssh -n multinode-873106-m02 "sudo cat /home/docker/cp-test_multinode-873106_multinode-873106-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 cp multinode-873106:/home/docker/cp-test.txt multinode-873106-m03:/home/docker/cp-test_multinode-873106_multinode-873106-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 ssh -n multinode-873106 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 ssh -n multinode-873106-m03 "sudo cat /home/docker/cp-test_multinode-873106_multinode-873106-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 cp testdata/cp-test.txt multinode-873106-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 ssh -n multinode-873106-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 cp multinode-873106-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2112677138/001/cp-test_multinode-873106-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 ssh -n multinode-873106-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 cp multinode-873106-m02:/home/docker/cp-test.txt multinode-873106:/home/docker/cp-test_multinode-873106-m02_multinode-873106.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 ssh -n multinode-873106-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 ssh -n multinode-873106 "sudo cat /home/docker/cp-test_multinode-873106-m02_multinode-873106.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 cp multinode-873106-m02:/home/docker/cp-test.txt multinode-873106-m03:/home/docker/cp-test_multinode-873106-m02_multinode-873106-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 ssh -n multinode-873106-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 ssh -n multinode-873106-m03 "sudo cat /home/docker/cp-test_multinode-873106-m02_multinode-873106-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 cp testdata/cp-test.txt multinode-873106-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 ssh -n multinode-873106-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 cp multinode-873106-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2112677138/001/cp-test_multinode-873106-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 ssh -n multinode-873106-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 cp multinode-873106-m03:/home/docker/cp-test.txt multinode-873106:/home/docker/cp-test_multinode-873106-m03_multinode-873106.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 ssh -n multinode-873106-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 ssh -n multinode-873106 "sudo cat /home/docker/cp-test_multinode-873106-m03_multinode-873106.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 cp multinode-873106-m03:/home/docker/cp-test.txt multinode-873106-m02:/home/docker/cp-test_multinode-873106-m03_multinode-873106-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 ssh -n multinode-873106-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 ssh -n multinode-873106-m02 "sudo cat /home/docker/cp-test_multinode-873106-m03_multinode-873106-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.23s)
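
Each CopyFile step above is the same round trip: minikube cp places a file on a node, then minikube ssh -n <node> "sudo cat <path>" reads it back so the bytes can be compared with the original testdata. A minimal sketch of that round trip using the binary path and profile shown in the log; copyAndVerify is an illustrative helper, not the test's own:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    const minikube = "out/minikube-linux-amd64"

    // copyAndVerify copies src onto the given node of the profile and reads it
    // back over ssh, returning an error if the round-tripped bytes differ.
    func copyAndVerify(profile, node, src, dst string) error {
        if out, err := exec.Command(minikube, "-p", profile, "cp", src, node+":"+dst).CombinedOutput(); err != nil {
            return fmt.Errorf("cp failed: %v: %s", err, out)
        }
        got, err := exec.Command(minikube, "-p", profile, "ssh", "-n", node, "sudo cat "+dst).Output()
        if err != nil {
            return err
        }
        want, err := os.ReadFile(src)
        if err != nil {
            return err
        }
        if !bytes.Equal(got, want) {
            return fmt.Errorf("content mismatch on %s", node)
        }
        return nil
    }

    func main() {
        err := copyAndVerify("multinode-873106", "multinode-873106-m02",
            "testdata/cp-test.txt", "/home/docker/cp-test.txt")
        fmt.Println(err)
    }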

                                                
                                    
TestMultiNode/serial/StopNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-873106 node stop m03: (1.431813308s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-873106 status: exit status 7 (427.388815ms)

                                                
                                                
-- stdout --
	multinode-873106
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-873106-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-873106-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-873106 status --alsologtostderr: exit status 7 (434.441222ms)

                                                
                                                
-- stdout --
	multinode-873106
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-873106-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-873106-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 11:22:14.374299   42045 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:22:14.374390   42045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:22:14.374397   42045 out.go:358] Setting ErrFile to fd 2...
	I1007 11:22:14.374402   42045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:22:14.374611   42045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-3912/.minikube/bin
	I1007 11:22:14.374767   42045 out.go:352] Setting JSON to false
	I1007 11:22:14.374793   42045 mustload.go:65] Loading cluster: multinode-873106
	I1007 11:22:14.374926   42045 notify.go:220] Checking for updates...
	I1007 11:22:14.375167   42045 config.go:182] Loaded profile config "multinode-873106": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:22:14.375183   42045 status.go:174] checking status of multinode-873106 ...
	I1007 11:22:14.375612   42045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:22:14.375650   42045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:22:14.394913   42045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36967
	I1007 11:22:14.395323   42045 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:22:14.395912   42045 main.go:141] libmachine: Using API Version  1
	I1007 11:22:14.395940   42045 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:22:14.396320   42045 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:22:14.396533   42045 main.go:141] libmachine: (multinode-873106) Calling .GetState
	I1007 11:22:14.398258   42045 status.go:371] multinode-873106 host status = "Running" (err=<nil>)
	I1007 11:22:14.398275   42045 host.go:66] Checking if "multinode-873106" exists ...
	I1007 11:22:14.398727   42045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:22:14.398774   42045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:22:14.414360   42045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43445
	I1007 11:22:14.414811   42045 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:22:14.415331   42045 main.go:141] libmachine: Using API Version  1
	I1007 11:22:14.415353   42045 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:22:14.415732   42045 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:22:14.415933   42045 main.go:141] libmachine: (multinode-873106) Calling .GetIP
	I1007 11:22:14.418486   42045 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:22:14.418910   42045 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:22:14.418946   42045 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:22:14.419062   42045 host.go:66] Checking if "multinode-873106" exists ...
	I1007 11:22:14.419366   42045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:22:14.419404   42045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:22:14.434836   42045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46693
	I1007 11:22:14.435319   42045 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:22:14.435826   42045 main.go:141] libmachine: Using API Version  1
	I1007 11:22:14.435846   42045 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:22:14.436177   42045 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:22:14.436376   42045 main.go:141] libmachine: (multinode-873106) Calling .DriverName
	I1007 11:22:14.436586   42045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 11:22:14.436624   42045 main.go:141] libmachine: (multinode-873106) Calling .GetSSHHostname
	I1007 11:22:14.439182   42045 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:22:14.439625   42045 main.go:141] libmachine: (multinode-873106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:df:7f", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:19:26 +0000 UTC Type:0 Mac:52:54:00:37:df:7f Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:multinode-873106 Clientid:01:52:54:00:37:df:7f}
	I1007 11:22:14.439642   42045 main.go:141] libmachine: (multinode-873106) DBG | domain multinode-873106 has defined IP address 192.168.39.51 and MAC address 52:54:00:37:df:7f in network mk-multinode-873106
	I1007 11:22:14.439804   42045 main.go:141] libmachine: (multinode-873106) Calling .GetSSHPort
	I1007 11:22:14.439995   42045 main.go:141] libmachine: (multinode-873106) Calling .GetSSHKeyPath
	I1007 11:22:14.440150   42045 main.go:141] libmachine: (multinode-873106) Calling .GetSSHUsername
	I1007 11:22:14.440311   42045 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/multinode-873106/id_rsa Username:docker}
	I1007 11:22:14.523359   42045 ssh_runner.go:195] Run: systemctl --version
	I1007 11:22:14.529707   42045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 11:22:14.546100   42045 kubeconfig.go:125] found "multinode-873106" server: "https://192.168.39.51:8443"
	I1007 11:22:14.546144   42045 api_server.go:166] Checking apiserver status ...
	I1007 11:22:14.546215   42045 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 11:22:14.561663   42045 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup
	W1007 11:22:14.571471   42045 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1007 11:22:14.571540   42045 ssh_runner.go:195] Run: ls
	I1007 11:22:14.576349   42045 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I1007 11:22:14.580563   42045 api_server.go:279] https://192.168.39.51:8443/healthz returned 200:
	ok
	I1007 11:22:14.580583   42045 status.go:463] multinode-873106 apiserver status = Running (err=<nil>)
	I1007 11:22:14.580591   42045 status.go:176] multinode-873106 status: &{Name:multinode-873106 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 11:22:14.580610   42045 status.go:174] checking status of multinode-873106-m02 ...
	I1007 11:22:14.580916   42045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:22:14.580954   42045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:22:14.596165   42045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41549
	I1007 11:22:14.596703   42045 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:22:14.597189   42045 main.go:141] libmachine: Using API Version  1
	I1007 11:22:14.597242   42045 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:22:14.597608   42045 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:22:14.597784   42045 main.go:141] libmachine: (multinode-873106-m02) Calling .GetState
	I1007 11:22:14.599416   42045 status.go:371] multinode-873106-m02 host status = "Running" (err=<nil>)
	I1007 11:22:14.599442   42045 host.go:66] Checking if "multinode-873106-m02" exists ...
	I1007 11:22:14.599868   42045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:22:14.599914   42045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:22:14.618309   42045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38291
	I1007 11:22:14.618697   42045 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:22:14.619139   42045 main.go:141] libmachine: Using API Version  1
	I1007 11:22:14.619157   42045 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:22:14.619484   42045 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:22:14.619630   42045 main.go:141] libmachine: (multinode-873106-m02) Calling .GetIP
	I1007 11:22:14.622477   42045 main.go:141] libmachine: (multinode-873106-m02) DBG | domain multinode-873106-m02 has defined MAC address 52:54:00:44:2a:24 in network mk-multinode-873106
	I1007 11:22:14.622880   42045 main.go:141] libmachine: (multinode-873106-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:2a:24", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:20:31 +0000 UTC Type:0 Mac:52:54:00:44:2a:24 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-873106-m02 Clientid:01:52:54:00:44:2a:24}
	I1007 11:22:14.622902   42045 main.go:141] libmachine: (multinode-873106-m02) DBG | domain multinode-873106-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:44:2a:24 in network mk-multinode-873106
	I1007 11:22:14.623019   42045 host.go:66] Checking if "multinode-873106-m02" exists ...
	I1007 11:22:14.623367   42045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:22:14.623407   42045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:22:14.639375   42045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39823
	I1007 11:22:14.639907   42045 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:22:14.640443   42045 main.go:141] libmachine: Using API Version  1
	I1007 11:22:14.640474   42045 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:22:14.640813   42045 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:22:14.641017   42045 main.go:141] libmachine: (multinode-873106-m02) Calling .DriverName
	I1007 11:22:14.641187   42045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 11:22:14.641204   42045 main.go:141] libmachine: (multinode-873106-m02) Calling .GetSSHHostname
	I1007 11:22:14.644118   42045 main.go:141] libmachine: (multinode-873106-m02) DBG | domain multinode-873106-m02 has defined MAC address 52:54:00:44:2a:24 in network mk-multinode-873106
	I1007 11:22:14.644616   42045 main.go:141] libmachine: (multinode-873106-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:2a:24", ip: ""} in network mk-multinode-873106: {Iface:virbr1 ExpiryTime:2024-10-07 12:20:31 +0000 UTC Type:0 Mac:52:54:00:44:2a:24 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-873106-m02 Clientid:01:52:54:00:44:2a:24}
	I1007 11:22:14.644642   42045 main.go:141] libmachine: (multinode-873106-m02) DBG | domain multinode-873106-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:44:2a:24 in network mk-multinode-873106
	I1007 11:22:14.644803   42045 main.go:141] libmachine: (multinode-873106-m02) Calling .GetSSHPort
	I1007 11:22:14.644962   42045 main.go:141] libmachine: (multinode-873106-m02) Calling .GetSSHKeyPath
	I1007 11:22:14.645081   42045 main.go:141] libmachine: (multinode-873106-m02) Calling .GetSSHUsername
	I1007 11:22:14.645246   42045 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19761-3912/.minikube/machines/multinode-873106-m02/id_rsa Username:docker}
	I1007 11:22:14.731047   42045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 11:22:14.745564   42045 status.go:176] multinode-873106-m02 status: &{Name:multinode-873106-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1007 11:22:14.745603   42045 status.go:174] checking status of multinode-873106-m03 ...
	I1007 11:22:14.745937   42045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:22:14.745985   42045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:22:14.761135   42045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34271
	I1007 11:22:14.761694   42045 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:22:14.762241   42045 main.go:141] libmachine: Using API Version  1
	I1007 11:22:14.762261   42045 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:22:14.762586   42045 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:22:14.762779   42045 main.go:141] libmachine: (multinode-873106-m03) Calling .GetState
	I1007 11:22:14.764302   42045 status.go:371] multinode-873106-m03 host status = "Stopped" (err=<nil>)
	I1007 11:22:14.764315   42045 status.go:384] host is not running, skipping remaining checks
	I1007 11:22:14.764320   42045 status.go:176] multinode-873106-m03 status: &{Name:multinode-873106-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
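
With m03 stopped, minikube status exits with status 7 while still printing the per-node host/kubelet/apiserver breakdown, and the test treats that non-zero exit as the expected result rather than a failure. A minimal sketch of telling that case apart from a failed invocation in Go; the binary path and profile are taken from the log, and the meaning of exit code 7 is only what this run demonstrates:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "multinode-873106", "status").CombinedOutput()
        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("all nodes running")
        case errors.As(err, &exitErr):
            // Non-zero exit (7 in the run above) still comes with the text
            // breakdown of each node's host/kubelet/apiserver state.
            fmt.Printf("degraded (exit %d):\n%s", exitErr.ExitCode(), out)
        default:
            fmt.Println("could not run status:", err)
        }
    }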

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-873106 node start m03 -v=7 --alsologtostderr: (38.139922582s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.76s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-873106 node delete m03: (1.598891202s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.11s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (182.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-873106 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-873106 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m1.838710405s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-873106 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (182.37s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (41.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-873106
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-873106-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-873106-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (63.025003ms)

                                                
                                                
-- stdout --
	* [multinode-873106-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-873106-m02' is duplicated with machine name 'multinode-873106-m02' in profile 'multinode-873106'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-873106-m03 --driver=kvm2  --container-runtime=crio
E1007 11:34:19.449933   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-873106-m03 --driver=kvm2  --container-runtime=crio: (40.129754199s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-873106
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-873106: exit status 80 (215.51862ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-873106 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-873106-m03 already exists in multinode-873106-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-873106-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.42s)

                                                
                                    
TestScheduledStopUnix (111.76s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-836694 --memory=2048 --driver=kvm2  --container-runtime=crio
E1007 11:39:36.381295   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/addons-681605/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-836694 --memory=2048 --driver=kvm2  --container-runtime=crio: (40.173965699s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-836694 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-836694 -n scheduled-stop-836694
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-836694 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1007 11:39:47.653885   11096 retry.go:31] will retry after 134.836µs: open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/scheduled-stop-836694/pid: no such file or directory
I1007 11:39:47.655051   11096 retry.go:31] will retry after 154.903µs: open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/scheduled-stop-836694/pid: no such file or directory
I1007 11:39:47.656201   11096 retry.go:31] will retry after 116.745µs: open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/scheduled-stop-836694/pid: no such file or directory
I1007 11:39:47.657330   11096 retry.go:31] will retry after 461.509µs: open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/scheduled-stop-836694/pid: no such file or directory
I1007 11:39:47.658486   11096 retry.go:31] will retry after 482.474µs: open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/scheduled-stop-836694/pid: no such file or directory
I1007 11:39:47.659636   11096 retry.go:31] will retry after 702.804µs: open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/scheduled-stop-836694/pid: no such file or directory
I1007 11:39:47.660747   11096 retry.go:31] will retry after 752.914µs: open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/scheduled-stop-836694/pid: no such file or directory
I1007 11:39:47.661879   11096 retry.go:31] will retry after 1.597553ms: open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/scheduled-stop-836694/pid: no such file or directory
I1007 11:39:47.664055   11096 retry.go:31] will retry after 3.656076ms: open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/scheduled-stop-836694/pid: no such file or directory
I1007 11:39:47.668254   11096 retry.go:31] will retry after 5.661396ms: open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/scheduled-stop-836694/pid: no such file or directory
I1007 11:39:47.674470   11096 retry.go:31] will retry after 7.112382ms: open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/scheduled-stop-836694/pid: no such file or directory
I1007 11:39:47.682702   11096 retry.go:31] will retry after 10.391109ms: open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/scheduled-stop-836694/pid: no such file or directory
I1007 11:39:47.693928   11096 retry.go:31] will retry after 12.434565ms: open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/scheduled-stop-836694/pid: no such file or directory
I1007 11:39:47.707162   11096 retry.go:31] will retry after 28.493424ms: open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/scheduled-stop-836694/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-836694 --cancel-scheduled
E1007 11:39:51.322594   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:40:08.250491   11096 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-3912/.minikube/profiles/functional-382950/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-836694 -n scheduled-stop-836694
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-836694
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-836694 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-836694
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-836694: exit status 7 (65.811222ms)

                                                
                                                
-- stdout --
	scheduled-stop-836694
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-836694 -n scheduled-stop-836694
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-836694 -n scheduled-stop-836694: exit status 7 (63.108139ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-836694" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-836694
--- PASS: TestScheduledStopUnix (111.76s)

                                                
                                    
TestRunningBinaryUpgrade (236.95s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3155807308 start -p running-upgrade-056919 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3155807308 start -p running-upgrade-056919 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m5.409006794s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-056919 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-056919 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m48.021746994s)
helpers_test.go:175: Cleaning up "running-upgrade-056919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-056919
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-056919: (1.291425928s)
--- PASS: TestRunningBinaryUpgrade (236.95s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-836294 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-836294 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (81.360924ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-836294] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19761-3912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-3912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (97.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-836294 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-836294 --driver=kvm2  --container-runtime=crio: (1m36.829022883s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-836294 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (97.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-836294 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-836294 --no-kubernetes --driver=kvm2  --container-runtime=crio: (16.272666511s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-836294 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-836294 status -o json: exit status 2 (230.878368ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-836294","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-836294
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.32s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.64s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.64s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (100.25s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4097677721 start -p stopped-upgrade-926250 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4097677721 start -p stopped-upgrade-926250 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (53.352230602s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4097677721 -p stopped-upgrade-926250 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4097677721 -p stopped-upgrade-926250 stop: (2.129032336s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-926250 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-926250 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.764367224s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (100.25s)

                                                
                                    
TestNoKubernetes/serial/Start (52.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-836294 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-836294 --no-kubernetes --driver=kvm2  --container-runtime=crio: (52.699842879s)
--- PASS: TestNoKubernetes/serial/Start (52.70s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-836294 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-836294 "sudo systemctl is-active --quiet service kubelet": exit status 1 (210.331484ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.083143113s)
--- PASS: TestNoKubernetes/serial/ProfileList (1.90s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-836294
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-836294: (1.59206409s)
--- PASS: TestNoKubernetes/serial/Stop (1.59s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (39.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-836294 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-836294 --driver=kvm2  --container-runtime=crio: (39.385342198s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (39.39s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-836294 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-836294 "sudo systemctl is-active --quiet service kubelet": exit status 1 (199.128995ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-926250
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                    
TestPause/serial/Start (108.42s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-328632 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-328632 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m48.420283226s)
--- PASS: TestPause/serial/Start (108.42s)

                                                
                                    

Test skip (34/222)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.29s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:785: skipping: crio not supported
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-681605 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.29s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)